The Evolution from Model to Measured Success
Simple Models Are Seductive
A good friend of mine once observed: “All analogies break down eventually. Otherwise, they wouldn’t be analogies.” Yet there’s something in us that seeks to simplify the world through metaphor. The earth is flat like a pancake. No, wait – the world is round, but the sun revolves around us, making the beautiful music of the spheres. “Not so,” said Copernicus, and later Galileo, who held that the earth and planets revolved around the sun in precise patterns. Then Newton described the force of gravitational attraction and calculated these measurable orbits; later still, Einstein offered us the more sophisticated theory of general relativity, then quantum gravity, and so on. Since then, scientists the world over have pursued something called a unified field theory, or what some have called a theory of everything. What is common to all of these approaches is the desire to explain the world using a simple model. It’s a powerful, if often elusive, attraction.
Simple Organizational Models Can Be Misleading
Interestingly, the field of organizational improvement has evolved in a similar way. When James Reason released two of his landmark works, A Systems Approach to Organizational Error (1995) and Managing the Risks of Organizational Accidents (1997), safety professionals across the globe stood up and cheered, grateful for an academic model to explain how accidents occur: the now-famous Swiss cheese model. Before this, most of us were stuck with an unsatisfying description of causal relationships as “links in a chain.” Easy to understand – remove one link, the causal chain is broken, and the problem is solved. Only it wasn’t. Which link should we remove? All of them? Isn’t there a solitary “root cause,” a single link, that has to be identified? Reason’s model added a third dimension to that one-dimensional picture: accidents are the result of latent failures and hazards existing in our systems, like the holes in Swiss cheese. (Similarly, the Broken Windows theory of crime prevention offered much the same attraction in the law enforcement arena.) All we have to do is plug the holes, and now the problem is finally solved. But which holes do we plug first? All of them? Or only the ones we can observe? Slicing a bit deeper, the question becomes: are latent hazards and conditions only the result of faulty systems? What role does human behavior play in this culinary model?
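To see why the model is so seductive, consider a toy probabilistic reading of it – a minimal sketch, not anything Reason formalized, assuming each defensive layer fails independently. The layer names and probabilities below are purely illustrative:

```python
# A toy probabilistic reading of the Swiss cheese model: an accident
# requires every defensive layer to fail (its "holes" to align) at once.
# Layer names and probabilities are purely illustrative.
layers = {
    "procedural_barrier": 0.10,
    "supervisory_check": 0.05,
    "automated_alarm": 0.02,
}

p_accident = 1.0
for p_hole in layers.values():
    p_accident *= p_hole  # independence assumed, for illustration only

print(f"P(all holes align) = {p_accident:.6f}")  # 0.000100
```

Plugging any single hole shrinks the whole product – which is precisely the model’s seduction, and precisely why it offers no guidance on which hole to plug first.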
Cultural Models, Too, Can Be Seductive
Then, based on the pioneering work of David Marx and others, the term Just Culture arose to explain that humans are predictably fallible – prone not only to errors (i.e., inadvertent actions) but also to risky choices, some merely at-risk, others reckless. The Just Culture model gave us an Algorithm, a flowchart-based strategy, as a tool for standing in judgment of the predictably fallible human component in our systems. But in essence, workplace justice lays the foundation for our ability to learn from the experiences of frontline employees. To put it another way, learning from our mistakes depends on the employee’s willingness to come forward, and this willingness depends on the employee’s perceptions of what the likely response will be. And so “non-punitive” reporting programs became popular as a mechanism for risk identification. Somewhere along this evolutionary path the term “safety culture” became synonymous with “non-punitive” reporting, and then became intertwined with notions of Just Culture.
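To make the flowchart idea concrete before going further, here is a deliberately simplified sketch of the kind of evaluation such an Algorithm performs – a caricature, not Marx’s actual Algorithm, with the three behavior categories and managerial responses drawn from common Just Culture teaching:

```python
from enum import Enum, auto

class Behavior(Enum):
    HUMAN_ERROR = auto()  # inadvertent slip, lapse, or mistake
    AT_RISK = auto()      # risk not recognized, or wrongly believed justified
    RECKLESS = auto()     # conscious disregard of a substantial risk

def respond(behavior: Behavior) -> str:
    """Map a behavior category to a managerial response.

    A caricature of a flowchart-style evaluation; the real Algorithm
    weighs duties, intent, and context far more carefully.
    """
    if behavior is Behavior.HUMAN_ERROR:
        return "console the employee; look for ways to redesign the system"
    if behavior is Behavior.AT_RISK:
        return "coach the employee; remove the incentives for the risky choice"
    return "consider discipline; here the deterrent effect is warranted"

print(respond(Behavior.AT_RISK))
```

Note that only one of the three branches ends in punishment – a point that matters for the paragraph that follows.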
It’s easy to understand how Just Culture came to be seen as “non-punitive” in light of the “swinging pendulum” effect – what happens when people within an industry or organization perceive the culture to have been overly punitive in the past and conclude that it must be better to swing in the other direction and promote “blame-free” learning. The truth is that Just Culture was never intended to be “non-punitive” in the first place. No effective behavioral management approach can be complete without the powerful deterrent effect of punishment, properly applied. It’s just that punishment has so often been overused by organizations and regulators in the past.
Abraham Maslow perhaps said it best – “When the only tool in your hand is a hammer, the whole world starts to look like a nail.” Or words to that effect. So swinging the pendulum may have seemed like a good idea at the time.
A Balanced Approach Is Needed
But of course, it’s commonly accepted today that a balance must be struck between an overly punitive environment and a blame-free approach. In this respect, interpretations of Just Culture were helpful to many in striving for that balance of justice in the workplace. The problem, however, was that finding that balance alone was no guarantee of better outcomes. In short, workplace justice offers only the opportunity for learning and risk management, not a guarantee. And besides, information collected from humans has its limitations. In aviation, for example, many of the improvements in preventing accidents have come not from the human perspective but from the digital data and analyses generated by Flight Operational Quality Assurance (FOQA) programs. Of course, programs such as the Aviation Safety Action Program (ASAP) and Line Operations Safety Audits (LOSA) provide the human perspective, complementing and enhancing our understanding of risk. (It’s been said that FOQA may tell you what happened during a flight event, but only the human perspective can tell you why it happened.) To sum up: optimal learning requires a comprehensive perspective on risk identification, and that perspective often comes only when humans feel safe to come forward in a Just Culture-like environment.
So where does all this evolutionary thought leave us with respect to organizational improvement? The short answer is that workplace justice and learning systems will each fall short of producing better outcomes in isolation. The dilemma is this: once you have established workplace justice, and once you have a set of comprehensive learning systems, what then is to be done to prevent accidents? The answer is straightforward, if deceptively simple: design better systems and improve human behavioral choices. Easier said than done. In terms of a graphical model, an improved representation of adverse events is the socio-technical probabilistic risk assessment model: a fault tree of Boolean logic gates whose cut-sets describe the probabilistic pathways to specific classes of failure modes. Managing the socio-technical system (i.e., the hardware, software, processes, and tools, combined with humans as components) requires a balanced approach drawing on systems engineering design, behavioral psychology, neuroscience, and workplace justice.
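As a concrete (and deliberately tiny) illustration of the fault tree idea, here is a sketch that evaluates the probability of a top-level adverse event from AND/OR gates over basic events. The event names and probabilities are hypothetical, and the independence assumption is a simplification that real PRA models relax:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class BasicEvent:
    """A leaf of the tree: a single hazard with a probability."""
    name: str
    p: float

@dataclass
class Gate:
    """A Boolean logic gate combining sub-events."""
    kind: str  # "AND" or "OR"
    inputs: List[Union["Gate", BasicEvent]]

def probability(node: Union[Gate, BasicEvent]) -> float:
    """Top-event probability, assuming independent basic events."""
    if isinstance(node, BasicEvent):
        return node.p
    ps = [probability(child) for child in node.inputs]
    if node.kind == "AND":
        result = 1.0
        for p in ps:
            result *= p          # all inputs must occur together
        return result
    result = 1.0
    for p in ps:
        result *= (1.0 - p)      # OR: 1 minus "none of them occur"
    return 1.0 - result

# The adverse event requires a latent system hazard AND an unsafe act.
top = Gate("AND", [
    BasicEvent("latent_design_flaw", 0.05),
    Gate("OR", [
        BasicEvent("inadvertent_error", 0.10),
        BasicEvent("at_risk_choice", 0.02),
    ]),
])
print(f"P(adverse event) = {probability(top):.4f}")  # 0.0059
```

This toy tree has two minimal cut-sets – {latent_design_flaw, inadvertent_error} and {latent_design_flaw, at_risk_choice} – one pathway through the system’s hazards paired with each class of unsafe act, which is exactly the socio-technical coupling the model is meant to capture.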
Collaboration Is the Strategy for Better Outcomes
And the secret to putting all this together lies in collaboration. What we mean by this is the following:
stewardship of the limited resources we must manage to produce the outcomes we desire,
alignment between all stakeholders in the organizational mission and values we strive to protect,
cooperation between the humans driven by individual pursuits of happiness and perceptions of risk, and
integration of the systems, behaviors, programs, and models we develop to learn about and manage risk.
All of this is to say that we have chosen another metaphor to illustrate our model: a honeycomb structure, each hexagonal piece (designed or evolved?) fitting perfectly with the others, each holding firm as a keystone to the entire structure. Is this model incomplete? Yes. Is it useful? We believe so – let us share with you the power of this approach. Call it what you will – safety culture, Just Culture, socio-technical design. We’ll call it collaboration, and together we’ll demonstrate the art and science of improved organizational outcomes.