David Slater July 16, 2021
Steve Shorrock and his co-authors have produced a very timely review of the way the “science” has come to perceive “human error” as the predominant cause of accidents. They follow the journey from its early origins to where we are today, where the concept has become identified with a major industry devoted to identifying key “factors” and applying prescriptive solutions to eliminate it.
They point out that a milestone on that journey was the NATO Conference on Human Error, organised by Neville Moray and John Senders in 1983, which provided a key forum for discussion and an incubator of ideas.
One of the key catalysts at that conference has shared with me a photograph of the people present, which shows all the “usual suspects” and contributors. I thought that the history would benefit from including it, to recognise and emphasise the human element: real people, not “artificial intelligence”.
You will recognise the names, if not the faces.
Standing from left: John Wreathall, X, X, Willem Wagenaar, Bill Rouse, David Woods, McRuer, X, X, Don Norman, David Embrey, Giuseppe Mancini, Beth Loftus
On the couches from left: X, Alan Swain, Jim Reason, Neville Moray, John Senders (in the chair), Martine(?), Jens Rasmussen, Tom Sheridan, Erik Hollnagel, X.
Moray and Senders finally wrote it up in 1991 (Senders, J. W., & Moray, N. P. (1991). Human Error: Cause, Prediction, and Reduction. CRC Press) and quoted Hollnagel’s “evolving perspective”, which has taken some thirty years to emerge as mainstream.
As the review points out:
“A position paper submitted by Woods (1983) at this NATO meeting called for the need to look ‘behind human error’ and to consider how design leads to ‘system-induced errors’ (Wiener, 1977) rather than ‘human errors’. In the meeting, Hollnagel (1983) questioned the existence of human error as a phenomenon and called for a focus instead on understanding decision making and action in a way that accounts for performance variability.”
Again, to quote Hollnagel:
“The detection of this mismatch [between actual and expected outcomes] is thus the observational basis for inferring the existence of a “Human Error”. It should be noted that if there is no observed mismatch, there will be no reason to look for a cause. Variations in performance do not necessarily lead to undesired outcomes, hence mismatches. They may, for instance, be detected and corrected by the system at an early stage, or the environment can be sufficiently friendly and forgiving. There will consequently be cases of performance variability that remain unnoticed. From the point of view of a theory of “Human Error” they are, however, just as important as the cases where a mismatch is observed, and should therefore be accounted for by it.
“Consequently, I do not think that there can be a specific theory of “Human Error”, nor that there is any need for one. This is not because each error, as a “something” requiring an explanation, is unique, but precisely because it is not, i.e., because it is one out of several possible causes. Instead we should develop a theory of human action, including a theory of decision making, which may be used as a basis for explaining any observed mismatch. A theory of action must include an account of performance variability, and by that also the cases where “Human Error” is invoked as a cause.”
But, more importantly and responsibly, the review suggests a constructive way forward to try to bridge the gaps and “mismatches”.
“To err within a system is human: A proposed way forward
The concept of human error has reached a critical juncture. Whilst it continues to be used by researchers and practitioners worldwide, there are increasing questions regarding its utility, validity, and ultimately its relevance given the move towards the systems perspective. We suggest there are three camps existing within EHF (Shorrock, 2013): 1) a group that continues to use the term with ‘good intent’, arguing that we must continue to talk of error in order to learn from it; 2) a group who continues to use the term for convenience (i.e. when communicating in non-EHF arenas) but rejects the simplistic concept, instead focusing on wider organisational or systemic issues; and 3) a group who have abandoned the term, arguing that the concept lacks clarity and utility and its use is damaging. We predict that this third group will continue to grow. We acknowledge that the concept of error can have value from a psychological point of view, in describing behaviour that departs from an individual’s expectation and intention. It might also be considered proactively in system design; however, interactionalist methods that focus on all types of performance, rather than errors or failures alone, provide analysts with a more nuanced view. Importantly, however, we have seen how the concept of human error has been misused and abused, particularly associated with an error-as-cause view, leading to unintended consequences including blame and inappropriate fixes.”
They end with a positive vision that acknowledges the contributions of the early concepts and approaches, and concentrates on achieving real insight into and understanding of complex adaptive sociotechnical systems, rather than getting hung up on simplistic heuristics and biases.
“Human error has helped advance our understanding of human behaviour and has provided us with a set of methods that continue to be used to this day. It remains, however, an elusive construct. Its scientific basis, and its use in practice, has been called into question. While its intuitive nature has no doubt assisted EHF to gain buy-in within various industries, its widespread use within and beyond the discipline has resulted in unintended consequences. A recognition that humans only operate as part of wider complex systems leads to the inevitable conclusion that we must move beyond a focus on individual error to systems failure to understand and optimise whole systems.”