Risky Business

When did you last make a mistake? Maybe you had an accident in the car, left a tap running and flooded the house, made a bad investment. How did that feel?

Life is full of risks. We try to engineer them or their effects out as much as possible: we wear seatbelts, lay our infants on their backs at bedtime, tolerate airport security and buy insurance. When something bad happens, even when it was potentially avoidable, we know that doesn’t mean the person making the mistake was necessarily irresponsible or reckless.

What about at work? When did you last make a mistake at work? Have you missed a cancer on a chest radiograph, caused bleeding with a biopsy needle or forgotten to add an alert to a time-sensitive finding? Were you subject to an investigation or regulatory process? How did that feel? Did it feel different?

Medicine is a risky business. Sometimes error is avoidable, but some error is intrinsic to the operational practicalities of delivering modern healthcare. Missing a small abnormality on a few slices of a CT scan containing thousands of images is a mode of error that persists despite most radiologists being painfully aware of it. Mitigations to reduce the rate of occurrence (comfortable reporting workstations, freedom from interruption, reduced workload and pressure to report, double reporting, perhaps artificial intelligence assistance) are neither infallible nor always operationally realistic: double reporting, for example, halves capacity. While we design processes to reduce risk, it is impossible to engineer error out completely, and other models are needed. To make error productive we learn from it where we can, but we must recognise that sometimes there is nothing to learn, or that the lessons are so repeated and familiar that it might surprise an independent observer that the error persists (‘never events’ still happen).

If risk and error are intrinsic to what we do in healthcare, why then do we seem to fear error so much? The language we use about medical error is replete with emotionally laden and sometimes pejorative terms: negligence, breach of duty, substandard, avoidable, gross failure. Is it any wonder, then, that the meaning healthcare professionals sometimes attach to adverse event investigation outcomes is threat, personal censure and condemnation? The language frames the nature of the response: if negligence or substandard care has resulted in avoidable harm, there is an associated implication that the providers of that care were negligent or wilfully blind to it. Most healthcare professionals I know perceive themselves as striving to do their best for their patients, so this implication clashes with self-image, motivation and belief.

Fear of error is compounded by the manner in which error has historically been investigated (and how courts manage claims). Retrospective case review occurs when it appears something has gone wrong in a patient’s care and sometimes determines that an error was ‘avoidable’. Such review is inevitably biased by hindsight, and frequently by a narrow focus on the individual error and its harm without contextualising these within the wider workload or operational pressures prevailing at the time the error was made. Not noticing a small pneumothorax after a lung biopsy might be due to carelessness, or it might be because the operator was called away suddenly to manage a massive haemoptysis in a previous patient in recovery. It is easy to be wise after the event and to suggest that a different course of action should have been taken, but again this jars with our lived experience of making sometimes high-stakes decisions in pressured situations with frequently incomplete information. More enlightened modern investigatory processes understand this and are thankfully becoming increasingly commonplace in UK healthcare.

Too often we continue to perceive error as a personal failure, a marker of poor performance or incompetence, a point at which we could or should have done better. The individual identified at this point, when a latent failure becomes real, is often well placed to describe the upstream failures and process violations that led to the error, and the culture that allowed these violations to become normalised. In addition to the personal cost, focussing on personal failure means this individual is marginalised, their view dismissed and their intelligence lost. Thinking of this individual as a ‘second victim’ rather than as a perpetrator is helpful: patient and professional are both casualties. Such a view is by definition non-accusatory and is a neutral starting point for an inquisitorial assessment of why an error occurred.

Recognition that some error is unavoidable still allows for patients to be compensated when things go wrong. An organisation or individual may be liable for providing compensation even if they are not deemed responsible for the harm. The idea of liability as distinct from blame is familiar to us: it is why we buy third-party insurance for our cars. Some collisions are clearly due to negligent driving; many are not, but we are nevertheless liable for the consequences. In the UK, healthcare organisations are liable for the care they provide and are insured against claims for harm. For a patient to access compensation, legal action (or the threat of it) is required, which inevitably results in an assessment of blame, conflates liability with culpability and does nothing to promote a no-fault culture. The insurance is named the ‘Clinical Negligence Scheme for Trusts’, explicitly reinforcing the unhelpful notion that compensatable error is de facto negligence.

Even ultra-safe industries like aviation have ‘optimising violations’ (pilots refer to this as ‘flying in the grey’): there’s always a reason not to go flying. In healthcare we don’t get this choice: error is an inevitable consequence of the societal necessity of providing complicated healthcare to ill, frail people. The only way to avoid it is to not provide the care. We can only learn in an environment that is supportive when error occurs, that understands error is not a reflection of professional competence, and that embraces it as a potential opportunity to get better rather than a reason to punish. Without this, our practice will become beleaguered and bunkered, shaped by the fear of censure rather than by what is technically, practically and ethically the right thing to do.

Our regulators, legal system and investigatory processes have been slow to embrace the idea that some error is inevitable. They have much to learn from industries such as aviation. In the meantime, it remains hard to be content with the notion that an error in your practice is frequently merely a reflection that you work in a risky business.

(Images from: Drew T, Vo MLH, Wolfe JM. The invisible gorilla strikes again: sustained inattentional blindness in expert observers. Psychol Sci. 2013;24(9):1848–1853.)