
Cause: "Human Error."​ Overcoming Retribution and Discovering Learning

We know the story all too well; we’ve heard it a thousand times. Something bad happens, it is investigated, and we arrive at the obvious cause – the verdict? Human error. Someone broke a rule or violated a procedure, someone lost "situational awareness," someone didn't pay enough attention; if they had only tried harder or cared more, nothing bad would have happened. It all seems so simple; seductively simple - "You can't fix stupid!"


The normal view of “human error” leads us down the path of blame. We single out poor-performing employees, seek out evidence of poor behavior and misjudgments, and ultimately bring to light people's bad decisions, inaccurate assessments, and rule deviations. We seek out error, we find error, and we make the flawed assumption that the negative outcome occurred because of the presence of error.


This leads us to declare a war on error; we weaponize our systems. We blame and punish in attempts to "fix" the workers or at least teach them a lesson, we tell everyone to try harder, we standardize and try to simplify complexity, we write another rule, we invoke zero-tolerance policies around our most sacred of procedures, we retrain, we conduct stand-downs, we read the rules (new and old) louder and slower, and we hope and pray things get better. It's a tale we've heard many, many times before. We end up with this story for a few primary reasons - mostly related to assumptions and bias:


We persist in the assumption that error is a choice - But error is not intentional; it is a by-product of the system or setting. All events were unexpected to those who were involved.


It's in our nature to label and blame - it's easier to label and blame than it is to learn and improve. But this only gives us the illusion that we have addressed the problem. In reality, we have only exonerated the organization from deeper action and learning.


Post-event, we do not recognize that we are retrospective outsiders - Bias drives us to use the outcome of events to assess the quality of the decisions and actions that led up to them. We must not allow hindsight to influence our understanding of the context that surrounded the event; we must avoid mixing elements of our own realities into the realities that surrounded those involved. Remember, everything made sense to those involved, until suddenly it didn't.


We desire to make complexity linear - The problem is that failure does not occur in a linear fashion, yet we respond with linear investigatory techniques such as root cause analysis. Fixing a “root cause” alone will not prevent a future event from occurring, and singling out a root cause (or a small number of causes) promotes a flawed reductionist view.


We define "Safe" as the absence of negatives such as errors or events - In reality, safety is not the absence of errors or other negatives, it’s the presence of defenses or capacities that allow us to operate in the face of negatives or errors.


So, what does that mean?

"In real life, systems are not inherently safe, people create safety through practice. People do their best to reconcile different goals simultaneously – Efficiency, production, quality, safety, and etc… Work gets done safely and productively – not because people follow the rules or prescriptive procedures, but because they have learned how to make trade-offs and adapt their problem solving to the inevitable complexities and contradictions of their real world. They typically do this so smoothly, so effectively, that the underlying work doesn’t even go noticed or appreciated – organizationally we persist in our belief that the work went well because people must have listened and followed the rules." - Sidney Dekker

Traditionally, we have maintained our belief that strict and unwavering compliance is critical to the prevention of bad outcomes. In real life, safe or non-eventful outcomes are produced not by religious compliance with rules and procedures; they are produced by real-time negotiations, trade-offs, and adaptations at the pointy end. Safety is created, in real time, by those who perform the work.



Traditional risk management and safety approaches have heavily emphasized error counts, tabulation, and prediction. We have also persisted in the assumption that our systems are fine and simply need protection from unreliable humans to ensure success. We then attempt to protect these systems by introducing more rules, more training, more surveillance, heavier compliance demands, and heavier consequences for non-compliance - we create more bureaucracy. But do rules and compliance demands create safety?


Here's a little food for thought: the Australian Institute of Health Innovation performed studies into healthcare staff's knowledge of guidelines and procedures; the focus was primarily on the increasing bureaucratization within healthcare. They found that the average nurse had to consider 600 different guidelines and procedures every single day. When questioned about their knowledge of these policies and rules, the average nurse was only familiar with 3 (if that) of the 600 requirements in place. Some key points here:

  • Rules do not create safety

  • Attempts to "dumb down" complexity are ineffective

  • Practitioners will auto-filter for importance, typically focusing only on what's really important.

So, non-eventful outcomes are not produced by more rules or by tighter oversight and surveillance; they are not created by more counting, more procedures, or more compliance - they are not created by doing the same things we have always done, harder. Systems will always have to meet multiple opposing and competing goals, at the same time, with limited or lacking resources. Only people can reconcile these conflicts; people create safety through practice.

"Saying an event was caused by error or not following procedure is like saying an object fell due to gravity…its always true, it just doesn’t tell us anything ." - Todd Conklin

Safety is a dynamic non-event – when nothing is happening, it is because a lot is happening. Highly successful or highly reliable groups have not eliminated error; they do not achieve desirable outcomes by strict compliance with the rules and by "not messing up." Safety is not the absence of error; safety is the existence of positive capacities: capacities that allow these highly successful groups to generate reliable outcomes even in the face of negatives such as human error.



Human error, misjudgments, rule deviations, non-compliance, and all of the normal juicy things we have typically labeled as "causes" also exist in successful work. Yes, there was human error; someone messed up, someone missed something, or someone forgot something; they were human. That error, misjudgment, miscommunication, or miscalculation was simply a surface-level trigger of something much deeper within the system. "Human error" is not a very learning-rich path; focusing on the system and setting is where we will find deep and meaningful learning opportunities.


Disciplining human error doesn't fix anything (it only makes things worse)

Declaring a war on error leads us to target that which produces errors - people. These wars on error often cite the need for accountability for error producers. Many organizations currently believe that accountability means "holding people accountable." This is primarily due to flawed assumptions (usually deeply rooted organizational and societal assumptions) around human error. We usually accomplish this "accountability" through discipline – through administration of "the stick" to set an example, send a message, or protect the team or the company – we ultimately do this because we believe discipline is a behavioral modification tool and a method of fixing flawed employees. But does it actually do that? I don’t think so, at least not very effectively.

Think about this – usually, if we are punished for doing XYZ, we don’t stop doing XYZ, we try to avoid getting caught doing XYZ (think speeding tickets or texting while driving). Along with not working very well in general, disciplining an adult workforce around error has some serious negative side effects. We create a climate of Us vs. Them, we develop parent-child relationships, and we get less engagement, less real discussion, and less and less trust and openness; well-intended efforts that result in not-so-great unintended by-products. We end up driving people to spend more time hiding behaviors than doing what would be a lot more useful - talking about them openly and learning from them.



As we become more evolved in our thinking around human error, we also begin to evolve our thinking around discipline. We start to realize that discipline should be designed to fairly remove someone from our system, not used to “teach someone a lesson.” We stop seeing discipline as a tool for extracting flesh from those pesky wrongdoers; as an instrument of retribution. With all of this deeper understanding, we eventually find ourselves asking deeper and deeper questions: Should our companies be in the culpability or blame business at all? Is it our place to administer retribution, to meet hurt with hurt, to seek out an eye for an eye? So here’s my take on it, specifically around human error. Our work is inherently dangerous and the consequences can be severe; usually up to and including death. I think death is a pretty good motivator not to do something, so I don’t really believe we need to be in the business of manufacturing consequences. Our manufactured consequences (disciplinary actions) will never top death. I don't believe we need to be in the retribution business at all; I think we should be in the learning business – we need to be in the business of understanding why everything made sense until suddenly it didn’t. Anything that gets in the way of learning, including the need for retribution, is not good for us, our companies, or our people.

"The single greatest impediment to error prevention in the medical industry is that we punish people for making mistakes.” - Dr. Lucian Leape

A better path

Human error is a window on a deeper problem within the system and a marker in the system's everyday behavior; error is a by-product of human innovation and a possible indication of deeper system trouble. The fact that there was an error or misjudgment, finding that someone "lost situational awareness," discovering that a rule was bent or broken; that is the least learning-rich and most boring part of any event.


We only have two choices post failure – blame and retrain, or learn and improve. Moving past blame drives us towards the latter. After an event occurs we must ask better questions; questions that move us past the biases we have towards error and blame. We learn that an error was made, we learn that a rule was broken, but we dig deeper and learn how the error was made and why the rule was broken. We recognize that error is normal and we begin to shift towards a more restorative approach - asking “What went wrong?” rather than “Who caused the problem?” We understand that blame will fix nothing, we acknowledge that we would probably have made a similar error or broken the same rule if we were doing the same job, and we acknowledge that the failure will be repeated unless we improve the setting or system. As a by-product, our employees become our biggest asset in improving the system, our relationships improve, and we end up with a long list of possible improvement actions.
