NHS workforce and the reality distortion field

The process of designing the first Apple Macintosh computer in the early 1980s was an arduous one. The exacting demands of Apple co-founder Steve Jobs resulted in his employees and colleagues describing a ‘reality distortion field’ around him and the people who came into his orbit, within which the impossible became possible. Rectangles with rounded corners when the processor couldn’t draw a circle? No problem. A device with a footprint smaller than a phone book when everything else was three times this size? OK. Shave half a minute off an already streamlined boot process? Yeah, we can do that.

Jobs was able to bridge the gulf between expectation and reality through the clarity of his idea, assisted by the sheer force of his personality, his drive, his obsession and a large dose of behaviour one might describe as bullying.

In today’s NHS we see a huge gulf between expectation and reality. Amongst other laudable aspirations, NHS England [NHSE] expects to eliminate elective waits of over 65 weeks by March 2024 and to increase diagnostic activity to 120% of pre-pandemic levels by April 2023. There will be improved cancer waiting times and outcomes, delivery of 50 million more GP appointments, upgraded maternity services and more, all delivered within a balanced budget.

And yet as I write, emergency departments are full to overflowing and secondary care is snarled up as social care cannot take discharges. High cost resources like theatres stand idle as hospitals grind to a halt. Primary care is drowning in demand. Much infrastructure is ageing. Estate is frequently tired, cramped and unfit for purpose. In this context, a reality distortion field with the metaphorical power of a black hole is required to make NHSE’s objectives seem even remotely achievable.

There are things that can be done: waste can be reduced and unnecessary bureaucracy eliminated; skill mix can be improved and workforce better deployed; estate can be upgraded flexibly to allow for new ways of working; services can be made more responsive to the needs of the people the NHS serves. Perhaps demand or public and political expectation can be managed. Maybe artificial intelligence or other technocratic solutions can finally deliver on their promise. We can refresh our NHS and make it comparable again with the best of our neighbouring nations.

Achieving all this requires money. Money is necessary but insufficient. It also requires people.

Without a motivated, engaged, enthusiastic, driven workforce, recovering from the current crisis will be impossible. It’s the staff of the NHS and social care sector who identify the blockages and inefficiencies and create the solutions needed to improve at all levels: from district nursing team to quaternary hospital service, from clinic to Integrated Care Board. This is not a new concept: Kaizen methodology with continuous improvement driven by all staff is well established in business and healthcare. It is the staff who deliver.

Jobs recognised the importance of people in delivering his vision. He surrounded himself with people he described as his ‘A’ team. They achieved what they did because, while he was a martinet, difficult to work with and prone to bouts of anger, rudeness and extreme condescension, he was also inspiring: he instilled loyalty and belief. People wanted to work for him, to deliver for him.

Given the strong vocational ethos in the NHS workforce, it should be easy to motivate its staff. But instead I perceive a disillusionment and learned helplessness that I have never known before. This is corrosive to initiative and problem solving. Motivating the workforce means paying people appropriately, recognising that pay and compensation have a significant effect on morale and on the recruitment of new colleagues and the retention of old ones. It means publishing a long-overdue workforce strategy. It means listening, and understanding the daily frustrations that erode professionalism and vocational drive. It means appreciating that working in ageing buildings with ageing equipment will inevitably breed apathy. It means transformative investment.

But more than this, the NHS needs a transformative vision, akin to that seen at its inception. This means having the bravery and honesty to start a public discourse on how to fund the NHS and social care long term: what we can (or choose to) afford as a country and what we cannot (or choose not to). It means confronting difficult policy decisions about cost-effectiveness and service rationing with the public, professionals and industry. It means addressing both the demand for, and the supply of, healthcare. Everyone I know in the NHS recognises that we cannot go on as we are, spending more and more on increasingly marginal outcomes.

And this is where the reality distortion field can help: because with the development of a transformative vision and a clear commitment to transformative investment I believe the NHS’s staff will deliver the solutions required. It has happened before and can happen again. Even before the money flows, the idea that the government understands and is committed to action will empower the workforce. It will allow the distortion field to develop and the gulf between expectation and reality to be bridged. But until the vision is developed and the investment begins there will be no reality distortion in the NHS. Just a grim reality.

Where might the vision come from? Clearly not from our current government, who seem to have only a wish-list of near-future outcomes expedient to their prospects at the next general election. To me, the only option seems to be a long-term collaborative effort, across successive Parliaments and political ideologies and involving all public, private, patient and professional stakeholders, to co-create it. Whether there is the political will, executive structure or inspiring leader to facilitate this remains to be seen. Steve Barclay is not Steve Jobs.

Parachutes, belief and intellectual curiosity

“There is no evidence that jumping out of a plane with a parachute improves outcome”

“If you go to PubMed, you will find no publications that breathing air is good for you”

“There’ll never be a trial, we are beyond that”

Have you ever heard these statements made when someone discusses the evidence about a particular new (or old) therapy? The statements might be true but are they useful? Do they advance an argument? What do they mean?

A paper from the Christmas edition of the British Medical Journal in 2018 found no evidence of benefit from parachute use when people jumping out of an aeroplane (which happened to be stationary at ground level) were randomised to wearing either a parachute or an empty North Face rucksack. This evidence built on a 2003 systematic review which found that there was no randomised evidence on the usefulness of parachutes for high-altitude exits. Both articles have a somewhat tongue-in-cheek style, but make the point that “…under exceptional circumstances, common sense must be applied when considering the potential risks and benefits of interventions”.

It is self-evident that wearing a parachute when jumping out of a plane in flight, or being in an atmosphere with enough air to breathe, is good for you. When people quote arguments about parachutes or air (or similar) in response to a query about a lack of evidence for a particular intervention, they are implying that the intervention they are discussing is similarly self-evidently safe, effective or cost-effective, and that common sense must be applied.

The issue is that the benefits of most medical interventions are clearly not in this category. To give some examples from my own field, it is not self-evident that dosimetric methods will improve the outcomes of selective internal radiation therapy sufficiently to make a difference to trial outcomes, that endovascular intervention for acute deep vein thrombosis improves long term outcomes compared with anticoagulation, that for complex aneurysms endovascular aneurysm repair is better than open surgery or conservative management… I could go on.

And here we come to the crux of the matter, which is that such comments add nothing to a discussion about an intervention’s evidence base. Rather, their effect is to stifle debate into a confused silence. Whether this is done intentionally or out of embarrassment is irrelevant; the effect is the same: intellectual curiosity is suppressed and questioning is discouraged. This is the opposite of the empiricism that underpins the whole of Western scientific thought. Before people asked questions, it was self-evident that the Earth was flat, that it was the centre of the universe and that it was orbited by the Sun. That was just common sense.

A strategy related to appeals to common sense is the weaponisation of the weight of collective opinion. Clinical trial design is dependent on equipoise, meaning clinicians do not know which of several options is better. Equipoise is dependent on opinion, and opinion is swayed by much more than evidence. Medical professionals are just as receptive to marketing, advertising, fashion and halo bias as anyone. Nihilistic statements denying a trial is possible (or even desirable) on the grounds that an intervention has become too popular or culturally embedded are only true if we allow them to be. The role of senior ‘key opinion leaders’ is critical here: they have a responsibility to openly question the status quo, to use their experience to identify and highlight the holes in the evidence, to point out the ‘elephant in the room’. But too often these leaders (supported in some cases by professional bodies and societies) become a mouthpiece for industry and vested interest, promoting dubious evidence, suppressing debate and inhibiting intellectual curiosity. There are notable examples of trials overcoming the hurdle of entrenched clinical practice and assessing deeply embedded cultural norms. This requires committed leaders who create a culture where doubt, equipoise and enquiry can flourish.

Given the rapid pace of technological development in modern healthcare, it is not unreasonable to have an opinion about an intervention that is not backed by the evidence of multiple congruent randomised controlled trials. But this opinion must be bounded by a realistic uncertainty. A better word for this state of mind is a reckoning. To reckon allows for doubt. Instead, when an opinion becomes a belief, doubt is squeezed out. ‘Can this be true?’ becomes ‘I want this to be true’, then ‘it is true’ and ultimately ‘it is self evidently true’. Belief becomes orthodoxy, and questioning becomes heresy and is actively (or passive aggressively) suppressed.

Karl Popper’s theory of empirical falsification states that a theory is only scientifically valid if it is falsifiable. In his book on assessing often incomplete espionage intelligence, David Omand (former head of the UK electronic intelligence, security and cyber agency, GCHQ) comments that the best theory is the one with the least evidence against it. A powerful question therefore is not “what evidence do I need to demonstrate that this view of the world is right?” but its opposite: “what evidence would I need to demonstrate that this view of the world is wrong?”. Before the second Gulf War in 2002-3, an important question was whether Iraq had an ongoing chemical weapons programme. As we all know, no evidence was found (before or after the invasion). The theory with the least evidence against it is that Iraq had, indeed, destroyed its chemical weapons stockpile. More prosaically, that all swans are white is self-evident until you observe a single black one.

If someone is so sure that an intervention is self-evidently effective, proposing an experimental design to test this should be welcomed, not seen as a threat. But belief (as opposed to a reckoning) is tied up in identity, self-worth and professional pride. What, then, does an impassioned advocate of a particular technique have to gain from an honest answer to the question “what evidence would it take for you to abandon this intervention as ineffective?” if that evidence is then produced?

Research is hard. Even before the tricky task of patient recruitment begins, a team with complementary skills in trial design, statistics, decision making, patient involvement, data science and many more must be assembled. Funding and time must be identified. Colleagues must be persuaded that the research question is important enough to be prioritised amongst their other commitments. This process is time consuming, expensive and often results in failure, as my fruitless attempts at getting funding from the National Institute for Health and Care Research for studies on abdominal aortic aneurysm attest. But this is not to say that we should not try. We are lucky in medicine that many of the research questions we face are solvable by the tools we have at our disposal, if only we could deploy them rapidly and at scale. Unlike climate scientists, we can design experiments to test our hypotheses. We do not have to rely on observational data alone.

The 2018 study on parachute use is often cited as a criticism of evidence based medicine. That a trial can produce such a bizarre result is extrapolated to infer that all trials are flawed (especially if they do not produce the desired result). My reading of the paper is that the authors have little sympathy for these arguments. After discussing the criticisms levelled at randomised trials they write with masterly understatement “It will be up to the reader to determine the relevance of these findings in the real world” and that the “…accurate interpretation [of a trial] requires more than a cursory reading of the abstract.”

As I wander around the device manufacturers at medical conferences I wonder: if more of the resource used to fund the glossy stands, baristas and masseuses were channelled into rigorous and independent research, generating the evidence to support what we do would be so much easier. And I wonder why we tolerate a professional culture that so embraces orthodoxy, finds excuses not to undertake rigorous assessments of the new (and less new) interventions we undertake, and is happy to allow glib statements about trial desirability, feasibility and generalisability, about parachutes and air, to go unchallenged.

Values, guidance, NICE and the ESVS

This is a transcript of a 7-minute talk I was invited to give at the Cardiovascular and Interventional Radiological Society of Europe’s [CIRSE] annual conference in Barcelona, as part of a session on “Controversies in Standard Endovascular Aneurysm Repair [EVAR] within IFU” [instructions for use].

This talk, “NICE guidelines best inform clinical practice”, was one side of a debate: my opponent’s title was “European Society for Vascular Surgery [ESVS] guidelines should be the standard of care”.

If you have on-demand access to CIRSE2022 content, you can view a recording of the session here.

Barcelona. Spain. 13th September 2022. 15:00h

Thanks. My name is Chris Hammond, and I’m Clinical Director for radiology in Leeds. I was on the NICE AAA guideline development committee from 2015-2019.

I have no declarations, financial or otherwise. We’ll come onto that in a bit more detail later.

This talk is not going to be about data. I hope we are all familiar with the published evidence about AAA repair. No. This talk is about values. Specifically, the values that NICE brings to bear in its analysis and processes to create recommendations and why these values mean NICE guidelines best inform clinical practice. What are those values?

Rigour, diversity, context.

Let’s unpick those a little.

NICE is known for academic rigour. Before any development happens, the questions that need answering are clearly and precisely identified in a scoping exercise. A PICO (population, intervention, comparison, outcome) question is created, the outcomes of interest are defined, and the types of evidence we are prepared to accept are stipulated in advance.

The scope and research questions are then published and sent out for consultation – another vital step.

After the technical teams have done their work, their results are referred explicitly back to the scope. Conclusions and recommendations unrelated to the scope are not allowed.

This process is transparent and documented and it means committee members cannot change their mind on the importance of a subject if they do not like the evidence eventually produced. 

It’s impossible to tell from the ESVS document what their guideline development process was. A few paragraphs at the beginning of the document are all we get. ESVS do not publish their scope, research questions, search strategies or results. How can we be assured, therefore, that their conclusions are not biased by hindsight, reinterpreting or de-emphasising outcomes that are not expedient?

We can’t.

For example, data on cost effectiveness and outcomes for people unsuitable for open repair are inconvenient for EVAR enthusiasts. I’ll let you decide the extent to which these data are highlighted in the ESVS document.

Moreover, in failing to define the acceptable levels of evidence for specific questions, ESVS ends up making recommendations based on weak data. Recommendations are made using the European Society of Cardiology criteria, which conflate evidence and opinion. Which is it? Evidence or opinion?

Opinions may be widely held and still be wrong. The sun does not orbit the earth. Formulating an opinion into a guideline gives that opinion an illegitimate validity.

Finally, there is the rigour in dealing with potential conflicts of interest. These are the ESVS committee declarations – which I had to ask for. The NICE declarations are in the public domain on the NICE website. Financial conflicts of interest are not unexpected, though one might argue that the extensive and financially substantial relationships with industry of some of the ESVS guideline authorship do raise an eyebrow.

The question though is what to do about them. NICE has a written policy on how to deal with a conflict, including exclusion of an individual from part of the guidance development where a conflict may be substantial. This occurred during NICE’s guideline development.

The ESVS has no such policy. I know because I have asked to see it. Which makes one wonder: why collect the declarations in the first place?

How can we then be assured these conflicts of interest did not influence guideline development, consciously or subconsciously?

We can’t.

What about diversity? 

This is the author list of the ESVS guideline. All 15 of the authors, all 13 of the document reviewers and all 10 of the guideline committee are medically qualified vascular specialists. They are likely to all have had similar training, attended similar conferences and educational events and have broadly similar perspectives. It’s a monoculture. 

Where are the patients in this? The ESVS asked for patient review of the plain English summaries it wrote to support its document, but patients were not involved in the development of scoping criteria, outcomes of importance or in the drafting of the guideline itself.

Where is the diversity of clinical opinion? Where are the care of the elderly specialists to provide a holistic view? Where is anaesthesia? Primary care? Nursing?

Where is the representation of the people who pay for vascular services: infrastructure, salaries, devices? And who indirectly pay for all this, maybe for your meal out last night, for the cappuccino you’ve just drunk? Where is their perspective when they also have to fund the panoply of modern healthcare?

NICE committees have representation from all these groups, and their input into the development of the AAA guidance was pivotal. The NICE guidance was very controversial, but the consistency of arguments advanced by diverse committee members with no professional vested interest was persuasive.

Finally, we come to context.

An understanding of the ethical and social context underpinning a guideline is essential.

We cannot divorce the treatments we offer from the societal context in which we operate. We live in a society which emphasises individual freedom and choice, and we are comfortable with some people having more choices than others, usually based on wealth. Does this apply equally in healthcare? In aneurysm care? What if offering expensive choices for aneurysm repair means we don’t spend money on social care, nursing homes, cataracts or claudicants?

To what extent should guidelines interfere with the doctor-patient relationship? Limit it or the choices on offer? What is the cost of clinical freedom and who bears it?

NICE makes very clear the social context in which it makes its recommendations. It takes a society-wide perspective, and its social values and principles are explicit. You can find them on the NICE website. Even if you don’t agree with its philosophical approach, you know what it is.

We don’t know any of this for the ESVS guideline. We don’t know how ESVS weighs choice against cost, the individual against the collective, healthcare against health. This means that the ESVS guideline ends up being a technical document, written by technicians for technicians, devoid of context and wider social relevance.

The ESVS guideline is not an independent, dispassionate analysis, and it never could be, because its development, within an organisation so financially reliant on funding from the medical devices industry, was not openly and transparently underpinned by NICE’s values of rigour, diversity and context.

Rigour. Diversity. Context.

That’s why NICE guidelines best inform clinical practice.

Thanks for your attention.


Human Psychology, Nobel Laureates and Radiology Demand Management

Demand management; responsible requesting; appropriate referring; clinical vetting. Call it what you like: managing the demand for medical imaging is a hot topic. When it’s cheaper, easier and apparently more objective to get a scan than to get a senior and holistic medical opinion the demand for imaging will only increase.

Whether demand management is a good or a bad thing depends on your point of view. When I was a registrar, the fellow told a story about a placement he had done in a large hospital in the United States. After vetting a sorry litany of poorly justified inpatient ultrasound requests by chucking a third of them in the bin (as was his normal NHS practice), he was called aside by the Senior Attending to be told in vituperative terms and with a healthy smattering of agricultural language, that he had cost the department $25,000 in one morning and to please cease and desist.

On the other hand, in the increasingly austere environment of United Kingdom healthcare, the catchy “supporting clinical teams to ensure diagnostic test requesting that maximises clinical value and resource utilisation” is an important component of efforts to increase the productivity of the service. This remains true even if the inexorable increase in demand, driven by rising hospital attendance, direct requesting from primary care and widening indications for complex imaging such as CT and MRI, is unavoidable, and even mandated.

What levers can we put in place to try to ensure responsible requesting? There are some lessons we can learn here from two Nobel Laureates: Daniel Kahneman and Richard Thaler. Kahneman won the Nobel Prize in Economics in 2002 for his work on Prospect Theory, and Thaler in 2017 for his work on behavioural economics. Their work on how people make decisions has implications for radiology requesting.

In his book ‘Thinking, Fast and Slow’, Kahneman describes the multiple biases and cognitive traps that distort the way we think and form judgements. One of these is ‘base-rate neglect’, in which we make narrative judgements about individual cases without considering the statistical likelihood of that judgement being correct. One of the simplest examples in his book is the following:

A young woman is described as a ‘shy poetry lover’. Is she more likely to be a student of Chinese or of Business Administration? 

The base-rate (numbers of people who study the two subjects) tends to suggest the latter, and the fact that she is female and is a ‘shy poetry lover’ tells you nothing of objective relevance as to which subject she chose. But which subject jumped into your mind? Worse than this, even when we are made aware of the base-rate, we tend to ignore this information. We continue to do this even when we are also reminded of the tendency to neglect base-rate: you probably find, even now, you cannot quite shake the image of the young woman studying Chinese.
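To see how strongly the base-rate should dominate, here is a worked version of the calculation with invented but plausible numbers. Suppose Business Administration students outnumber Chinese students twenty to one, and grant the stereotype real diagnostic weight by supposing shy poetry lovers are four times more common among Chinese students. The odds form of Bayes’ theorem then gives:

\[
\frac{P(\text{Chinese} \mid \text{shy poet})}{P(\text{Business} \mid \text{shy poet})}
= \frac{P(\text{shy poet} \mid \text{Chinese})}{P(\text{shy poet} \mid \text{Business})}
\times \frac{P(\text{Chinese})}{P(\text{Business})}
= 4 \times \frac{1}{20} = \frac{1}{5}
\]

Even on these generous assumptions, the odds still favour Business Administration five to one. The narrative impression and the arithmetic point in opposite directions.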

In a radiology requesting context, the base-rate might be the statistical likelihood of a specific diagnosis. It might also be the rates of requesting of an individual, department or Trust relative to relevant peers (‘over-‘ or ‘under-requesting’) or other averaged metrics. 

But what the base-rate neglect phenomenon tells us is that this information is disregarded when a clinician forms a judgement about whether to request imaging. The referrer’s thought processes create a narrative image of representativeness (of a patient’s presentation and a likely diagnosis) which may be completely divorced from the statistical likelihood of that diagnosis – hence referrals with the irritating query: “please exclude…”. Similarly, colleagues who are informed of their imaging practice relative to peers are unlikely to weigh this information when making decisions about imaging a specific patient at the point of requesting. If our goal is behaviour change, the base-rate neglect phenomenon tells us it’s pointless to describe the base-rate, to use non-binding clinical decision rules describing the likelihood of a particular diagnosis or to spotlight systemic over-requesting relative to peers. This information will simply be neglected, often subconsciously. This is not how anyone, clinicians included, makes decisions.

Even for conscious thought processes there will always be reasons why an expert feels their judgement will outperform an algorithm (or a decision rule), despite evidence that they frequently do worse. Kahneman describes this as the illusion of skill, though there is some debate about the extent of this illusion and about the added value of expertise. However, when perceptions of skill are intimately bound to doctors’ social role and idea of personal worth, it is singularly difficult for them to accept algorithmic decisions which undermine these perceptions and the utility of their subjective judgement.

What else might work? Binding decision rules (for example, not being allowed to request an imaging test unless certain criteria are met) and strict clinical pathways can help, though they can be proscriptive and rarely result in less imaging.

It is here that the work of our other Nobel Laureate, Richard Thaler, might help. In his book ‘Nudge’ he describes how people can be encouraged to make better decisions by careful design of the systems within which those decisions are made: something he describes as ‘choice architecture’. A simple example is auto-enrolment in pension schemes: the design (architecture) of the choice offered (enrolled by default, with the option to opt out) favours enrolment over the alternative of asking people to enrol themselves (opting in).

In radiology requesting, decision rules could default to a particular scan type in particular clinical scenarios; information could be presented about relative cost, complexity, patient discomfort or radiation dose; alternatives could be suggested, including senior clinical review; imaging choices could be limited by requester seniority or prior imaging studies; duplicate requests could be flagged. None of this requires complex software logic, and much of this work has already been done (eg. the Royal College of Radiologists’ iRefer resources) – the critical step is to embed these resources into the requesting system’s choice architecture at the point of imaging request. The referrer still has all options available to them but has to consciously decide to override a recommendation and consider the consequences of their choice.
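As an illustration only, here is a minimal sketch of what nudge-style choice architecture might look like inside a requesting system. All rules, names and wording are invented; a real implementation would draw on resources like iRefer and local protocols:

```python
# Minimal illustrative sketch of nudge-style choice architecture at the point
# of imaging request. All rules, names and thresholds here are invented.

from dataclasses import dataclass

@dataclass
class ImagingRequest:
    test: str              # e.g. "CT head"
    indication: str        # free-text clinical indication
    requester_grade: str   # e.g. "FY2", "consultant"
    has_recent_duplicate: bool

def nudges_for(req: ImagingRequest) -> list[str]:
    """Return advisory nudges. Nothing is blocked: the referrer can
    override any suggestion, but must do so consciously."""
    nudges = []
    if req.has_recent_duplicate:
        nudges.append("A similar study was requested recently: review it before proceeding?")
    if "exclude" in req.indication.lower():
        nudges.append("What is the estimated likelihood of this diagnosis? Consider senior review.")
    if req.test.startswith("MRI") and req.requester_grade not in ("SpR", "consultant"):
        nudges.append("MRI requests at this grade are usually discussed with a senior first.")
    return nudges

# Example: the request can still proceed, but with visible friction and feedback.
request = ImagingRequest("MRI lumbar spine", "Back pain, please exclude cord compression", "FY2", False)
for message in nudges_for(request):
    print(message)
```

The design choice is Thaler’s: defaults and friction rather than prohibition.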

Finally, both Kahneman and Thaler emphasise the importance of feedback in affecting behaviour. In order to learn, feedback needs to be timely, personal and specific. That is why we learn quickly not to put our finger into a flame but find it difficult to reconcile our individual contribution to global warming. Although there are obvious difficulties with the lost art of imaging vetting by intimidation, sighing deeply while tearing a quaking junior doctor’s request card into small pieces certainly provided the opportunity for immediate feedback and learning, especially if accompanied by a patient explanation. Vetting undertaken remotely (spatially and temporally) means this feedback is diluted, making it much less likely that learning will occur and requesting behaviour will alter. If departmental processes and electronic systems can be designed to allow prompt feedback to the requester that an imaging request has been rejected (or at least needs more discussion), this is much more likely to shift behaviour and improve quality, even for an itinerant and ever-changing requester workforce.

There are other ways in which radiology demand can be managed: imaging protocols (eg. follow up after cancer or surveillance imaging) can be revised, financial disincentives can be created to suppress imaging use, waiting lists can be reassessed and validated. Some of these methods are more acceptable clinically and ethically than others.

What behavioural psychology and decision science tell us is that in order to alter requesting behaviour and culture, nudges, feedback and narrative story are more likely to get results than generic exhortations to reduce imaging use or to consider base-rates and statistical probability.

There are simple wins here, and the IT systems facilitating these nudges and this feedback need not be complex.

Registry Data and the Emperor’s New Clothes

Registries. They’re a big thing in interventional radiology. Go to a conference and you’ll see multiple presentations describing a new device or technique as ‘safe and effective’ on the basis of ‘analysis of prospectively collected data’. National organisations (eg. the Healthcare Quality Improvement Partnership [HQIP] and the National Institute for Health and Care Excellence), professional societies (like the British Society of Interventional Radiology) and the medical device industry promote them, often enthusiastically.

The IDEAL collaboration is an organisation dedicated to quality improvement in research into surgery, interventional procedures and devices. It has recently updated its comprehensive framework for the evaluation of surgical and device based therapeutic interventions. The value of comprehensive data collection within registries is emphasised in this framework at all stages of development, from translational research to post-market surveillance.

Baroness Cumberlege’s report into failures in the oversight of new devices, techniques and drugs identified a lack of vigilant long-term monitoring as contributing to a system that is not safe enough for those being treated using these innovations. She recommended that a central database be created for implanted devices, for research and audit into their long-term outcomes.

This is all eminently sensible. Registries, when properly designed and funded and with a clear purpose and goal are powerful tools in generating information about the interventions we perform. But I feel very uneasy about many registries because they often have unclear purpose, are poorly designed and are inadequately funded. At best they create data without information. At worst they cause harm by obscuring reality or suppressing more appropriate forms of assessment.

A clear understanding of the purpose of a registry is crucial to its design. Registries work best as tools to assess safety. In a crowded and expensive healthcare economy, this is an insufficient metric by which to judge a new procedure or device. Evidence of effectiveness relative to alternatives is crucial. If the purpose of a registry is to make some assessment of effectiveness, its design needs to reflect this.

The gold standard tool for assessing effectiveness is the randomised controlled trial [RCT]. These are expensive, time-consuming, and complex to set up and coordinate. As an alternative, a registry recruiting on the basis of a specific diagnosis (equivalent to RCT inclusion criteria) is ethically simpler and frequently cheaper to instigate. While still subject to selection bias, a registry recruiting on this basis can provide data on the relative effectiveness of the various interventions (or no intervention) offered to patients with that diagnosis. The registry data supports shared decision making by providing at least some data about all the options available. 

Unfortunately, most current UK and international interventional registries use the undertaking of the intervention (rather than the patient’s diagnosis) as the criterion for entry. The lack of data collection about patients who are in some way unsuitable for the intervention or opt for an alternative (such as conservative management) introduces insurmountable inclusion bias and prevents the reporting of effectiveness and cost-effectiveness compared with alternatives. The alternatives are simply ignored (or assumed to be inferior) and safety is blithely equated with effectiveness without justification or explanation. Such registries are philosophically anchored to the interests of the clinician (interested in the intervention) rather than to those of the patient (with an interest in their disease). They are useless for shared decision making. 
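To make the contrast concrete, here is a schematic sketch of what each entry criterion allows a registry to report. The field names and values are entirely hypothetical; no real registry is described:

```python
# Hypothetical sketch contrasting registry entry criteria.
# Fields and values are invented for illustration only.

diagnosis_based_registry = {
    "entry_criterion": "diagnosis made (analogous to RCT inclusion criteria)",
    "captures": ["intervention A", "intervention B", "conservative management"],
    "can_report": "comparative (cost-)effectiveness across all options, albeit with selection bias",
}

intervention_based_registry = {
    "entry_criterion": "intervention A performed",
    "captures": ["intervention A"],
    "can_report": "safety and technical outcomes of intervention A only; "
                  "no comparator, so no effectiveness claim is possible",
}
```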

This philosophical anchoring is also evident in choices about registry outcome measures, which are frequently those easiest to collect rather than those which matter most to patients: a perfect example of the McNamara (quantitative) fallacy. How often are patients involved in registry design at the outset? How often are outcome metrics relevant to them included, rather than surrogate endpoints of importance to clinicians and device manufacturers?

Even registries where the ambition is limited to post-intervention safety assessment or outcome prediction, and where appropriate endpoints are chosen, are frequently limited by methodological flaws. A lack of adequate statistical planning at the outset, and the collection of multiple baseline variables without consideration of the number of outcome events needed to allow modelling, risk overfitting and shrinkage: fundamental statistical errors.
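The arithmetic is sobering. Using the common (and itself debated) rule of thumb of roughly ten outcome events per candidate predictor, and invented but typical numbers:

\[
30 \text{ candidate predictors} \times 10 \text{ events per predictor} = 300 \text{ events};
\qquad
\frac{300 \text{ events}}{0.05 \text{ event rate}} = 6000 \text{ patients}
\]

A registry collecting thirty baseline variables for an outcome occurring in 5% of patients would therefore need roughly six thousand complete records before a prediction model is even plausible; fit one to a few hundred patients and overfitting is all but guaranteed.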

Systematic inclusion of ‘all comers’ is rare, but failure to include all patients undergoing a procedure introduces ascertainment bias. Global registries often recruit apparently impressive numbers of patients, but scratch the surface and you find rates of recruitment that suggest a majority of patients were excluded. Why? Why include one intervention or patient but not another? Such recruitment problems also affect RCTs, resulting in criticisms about ‘generalisability’ or real-world relevance, but it’s uncommon to see such criticism levelled at registry data, especially when it supports pre-existing beliefs, procedural enthusiasm, or endorses a product marketing agenda.

Finally there is the issue of funding. Whether the burden of funding and transacting post-market surveillance should fall primarily onto professional bodies, the government or the medical device companies that profit from the sale of their products is a subject for legitimate debate, but in the meantime registry funding rarely includes provision for systematic longitudinal collation of long-term outcome data from all registrants. Pressured clinicians and nursing staff cannot prioritise data collection without the time or funding to do so. Rather, the assumption is (for example) that the absence of notification of an adverse outcome automatically represents a positive one. Registry long-term outcome data are therefore frequently inadequate. While potential solutions such as linkages to routinely collected datasets and other ‘big data’ initiatives are attractive, these data are often generic and rarely patient-focussed. The information governance and privacy obstacles to linkage of this sensitive information are substantial.

Where does this depressing analysis leave us?

Innovative modern trial methodologies (such as cluster, preference, stepped wedge, trial-within-cohort or adaptive trials) provide affordable, robust, pragmatic and scalable alternative options for the evaluation of novel interventions and are deliverable within an NHS environment, though registries are still likely to have an important role to play. HQIP’s ‘Proposal for a Medical Device Registry’ defines key principles for registry development, including patient and clinician inclusivity and ease of routine data collection using electronic systems. When these principles are adhered to, registries can be powerful sources of information about practice and novel technologies: when they are conceived and designed around a predefined specific hypothesis or purpose, based on appropriate statistical methodology with relevant outcome measures, coordinated by staff with the necessary skillsets to manage site, funding and regulatory aspects, and budgeted to ensure successful data collection and analysis. This is a high bar, but it is achievable, as the use of registry data during the COVID pandemic has highlighted. Much effort is being expended on key national registries (such as the National Vascular Registry) to improve the quality and comprehensiveness of the data collected and to create links to other datasets.

But where these ambitions are not achieved we must remain highly sceptical about any evidence registry data purports to present. Fundamentally, unclear registry purpose, poor design and inadequate funding will guarantee both garbage in and garbage out.

Registry data is everywhere. Like the emperor’s new clothes, is it something you accept at face value, uncritically, because everyone else does? Do you dismiss the implications of registry design if the data interpretation matches your prejudice? Instead perhaps, next time you read a paper reporting registry data or are at a conference listening to a presentation about a ‘single arm trial’, be like the child in the story and puncture the fallacy. Ask whether there is any meaningful information left once the biases inherent in the design are stripped away.

Risky Business

When did you last make a mistake? Maybe you had an accident in the car, left a tap running and flooded the house, made a bad investment. How did that feel?

Life is full of risks. We try to engineer them or their effects out as much as possible: we wear seatbelts, lay our infants on their backs at bedtime, tolerate airport security and buy insurance. When something bad happens, even when it is potentially avoidable, we know that doesn’t mean the person making the mistake was necessarily irresponsible or reckless.

What about at work? When did you last make a mistake at work? Have you missed a cancer on a chest radiograph, caused bleeding with a biopsy needle or forgotten to add an alert to a time-sensitive finding? Were you subject to an investigation or regulatory process? How did that feel? Did it feel different?

Medicine is a risky business. Sometimes error is avoidable, but some error is intrinsic to the operational practicalities of delivering modern healthcare. Missing a small abnormality on a few slices of a CT scan containing thousands of images is a mode of error genesis that persists despite most radiologists being painfully aware of it. Mitigations to reduce the rate of occurrence (comfortable reporting workstations, absence of interruption, reduced workload and pressure to report, double reporting, perhaps artificial intelligence assistance) are neither infallible nor always operationally realistic. Double reporting halves capacity. While we design processes to reduce risk, it’s impossible to engineer error out completely, and other models are needed. To make error productive, we learn from it where we can, but we must recognise that sometimes there is nothing to learn, or that the lessons are so repeated and familiar that it might surprise an independent observer that the error persists (‘never events’ still happen).

If risk and error are intrinsic to what we do in healthcare, why then do we seem to fear error so much? The language we use about medical error is replete with emotionally laden and sometimes pejorative terms: negligence, breach of duty, substandard, avoidable, gross failure. Is it any wonder, then, that the meaning healthcare professionals sometimes ascribe to adverse event investigation outcomes is threat, personal censure and condemnation? The language frames the nature of the response: if negligence or substandard care has resulted in avoidable harm, there is an associated implication that the providers of that care were negligent or wilfully blind to it. Most healthcare professionals I know perceive themselves as striving to do their best for their patients, so this implication clashes with self-image, motivation and belief.

Fear of error is compounded by the manner in which error has historically been investigated (and how courts manage claims). Retrospective case review occurs when it appears something has gone wrong in a patient’s care and sometimes determines that an error was ‘avoidable’. Such review is inevitably biased by hindsight and frequently by a narrow focus on the individual error and its harm without contextualising this within the wider workload or operational pressures prevailing at the time the error was made. Not noticing a small pneumothorax after a lung biopsy might be due to carelessness, or it might be because the operator was called away suddenly to manage a massive haemoptysis in recovery on a previous patient. It’s easy to be wise after the event, to suggest a different course of action should have been taken, but again this jars with our lived experience of making sometimes high-stakes decisions in sometimes pressured situations with frequently incomplete information. More enlightened modern investigatory processes understand this and are thankfully becoming increasingly commonplace in UK healthcare.

Too often we continue to perceive error as a personal failure, a marker of poor performance or incompetence, a point at which we could or should have done better. The individual identified at this point, when a latent failure becomes real, is often well placed to describe the upstream failures and process violations that led to the error, and the culture that allowed these violations to become normalised. In addition to the personal cost, focussing on personal failure means this individual is marginalised, their view dismissed and their intelligence lost. Thinking of this individual as a ‘second victim’, rather than as a perpetrator, is helpful: patient and professional are both casualties. Such a view is by definition non-accusatory and is a neutral starting point for an inquisitorial assessment of why an error occurred.

Recognition that some error is unavoidable still allows for patients to be compensated when things go wrong. An organisation or individual may be liable for providing compensation even if they are not deemed responsible for the harm. The idea of liability as distinct from blame is familiar to us: it’s why we buy third-party insurance for our cars. Some collisions are clearly due to negligent driving. Many are not, but we are nevertheless liable for the consequences. In the UK, healthcare organisations are liable for the care they provide and are insured for claims for harm. For a patient to access compensation, legal action (or the threat of it) is required, which inevitably results in an assessment of blame, conflates liability with culpability and does nothing to promote a no-fault culture. The insurance is named the ‘Clinical Negligence Scheme for Trusts’, explicitly reinforcing the unhelpful notion that compensatable error is de facto negligence.

Even ultra-safe industries like aviation have ‘optimising violations’ (pilots refer to this as ‘flying in the grey’): there’s always a reason not to go flying. In healthcare we don’t get this choice: error is an inevitable consequence of the societal necessity for providing complicated healthcare to ill, frail people. The only way to avoid it is to not provide the care. We can only learn in an environment that is supportive when error occurs, understands that error is not a reflection of professional competence, embraces it as a potential opportunity to get better but does not punish. Without this our practice will become beleaguered and bunkered, shaped by the fear of censure rather than what is technically, practically and ethically the right thing to do.

Our regulators, legal system and investigatory processes have been slow to embrace the idea that some error is inevitable. They have much to learn from industries such as aviation. In the meantime, it remains hard to be content with the notion that an error in your practice is frequently merely a reflection that you work in a risky business.

(Images from: Drew T, Vo MLH & Wolfe JM. The invisible gorilla strikes again: Sustained inattentional blindness in expert observers. Psychol Sci. 2013 September ; 24(9): 1848–1853)

Moral hazard in a failing service

I go to see a woman on the ward to tell her that, again, her procedure is cancelled. I see, written in the resigned expression on her face, the effort and emotional energy it has taken to get herself here: arrangements she made about the care of her household, relatives providing transport from her home over 70 miles away and now unexpectedly called to pick her up. A day waiting, the anxiety building as a 9am appointment became 10, then lunchtime, then afternoon. The tedious arrangements that must now be repeated: COVID swabs, blood tests, anticoagulation bridging. All wasted.

She smiles at me as I apologise. She is kind, rather than angry, understanding rather than belligerent. And yet she has every right to be furious. This is, after all, the second time this has happened. And she knows as well as I do that my attempts at assurance that we will prioritise her bed for the next appointment she is offered are as empty and meaningless as they were last time she heard them.

Such stories are the everyday reality for patients and clinicians within the NHS, repeated thousands of times a day across the country, each one a small quantum of misery. At least my patient got an appointment. Some don’t. Ask anyone with a condition that is not life threatening or somehow subject to media scrutiny or an arbitrary governmental target about their access to planned hospital care and you will likely get a snort of derision or a sob of hopelessness. Benign gynaecological conditions (for example) can be debilitating but frequently slip to the bottom of the priority list, suffered in private silence, without advocates able to leverage the rhetorical and emotional weight of a cancer diagnosis.

This is not all COVID-related. Yes, COVID has made things worse, but really all the pandemic has done is cruelly reveal the structural inadequacies that we have been working around in the NHS for years and years. ‘Winter pressures’ have reliably and predictably closed planned care services, even if it took until the winter of 2017/18 for the NHS to officially recognise this and cancel all elective surgery for weeks. Estate is often old and not fit for purpose. Departmental and ward geography does not allow for the patient separation and flow demanded by modern healthcare. Staffing rotas are stretched to the limit with no redundancy for absence. Old infrastructure and equipment requires inefficient workarounds. Increasing effort goes into Byzantine plans for ‘service continuity’ to deal with operational risks, while the fundamentals remain unaddressed.

Efficiency requires investment. You cannot move from a production line using humans to one using robots without investing in the robots to do the work and the skilled people to run them. You cannot move from an inpatient to an outpatient model of care for a condition without investing in the infrastructure and people to oversee that pathway. You cannot manage planned and unplanned care via a single resource without adversely affecting the efficiency of both. You cannot expect a hugely expensive operating theatre or interventional radiology suite to function productively if the personnel tasked with running it spend a significant proportion of their day juggling cases and staff in an (often vain) attempt to get at least a few patients ready and through the department. Modern healthcare requires many systems to function optimally (or at least adequately) before anything can be done. Expensive resources frequently lie idle when a failure in one process results in the entire patient pathway collapsing.

The moral hazard encountered by people working in this creaking system is huge. How can we feel proud of the service we offer when failure is a daily occurrence? When we, the patient-facing front-of-house, are routinely embarrassed by – or apologetic for – the system which we represent? We can retreat into the daily small victories: a patient treated well, with compassion, leaving satisfied; an emergency expertly, efficiently and speedily dealt with; teamwork. But these small victories seem to be less and less consoling as the failures mount. Eventually staff (people after all) lose belief, drive and motivation. Disillusionment breeds diffidence, apathy and disengagement. The service, reliant on motivated and culturally engaged teams, becomes less safe, less caring, less personal and even more inefficient as staff are no longer inclined to work occasionally over and above their job-planned activity. A bureaucracy of resource management develops and teams become splintered. Process replaces culture and a credentialed skill-mix replaces trusted professional relationships.

The moral hazard is compounded by the seemingly wilful failure of our political masters, the holders of the purse strings, to comprehend the size of the problem. Absent any real prospect of improvement, we learn to accept the status quo, the cancellations, the delay, the waiting lists. And our patients accept this too: how else does one explain their weary stoicism? Meanwhile our leaders cajole us to be more efficient, to embrace new ways of working, to do a lot more with a bit more money. It remains politically expedient to disguise a few percent increase in healthcare revenue spending as ‘record investment’, but I argue that most people working at all levels in the NHS recognise the need for transformative generational investment on a level not seen since the inception of the service. Such investment requires money and money means taxation.

More than that, there needs to be the political bravery to open a considered debate about what we mean by healthcare, where our money is most efficiently targeted and what we, as a society, can (or are willing to) afford amongst other priorities for governmental spending. Shiny new hospitals providing state-of-the-art treatment may make good PR but are meaningless without functional, well-funded primary care. Investment in complex clinical technologies will not improve our nation’s health if its social determinants (poverty, smoking, diet, housing, education, joblessness, social exclusion) remain unaddressed. Such a discourse seems anathema to our current politics, with its emphasis on the individual, on technocratic solutions and on the empty promise of being able to have everything we want at minimal personal, environmental or societal cost.

Until our leaders start this debate, and until we, as members of society, understand the arguments and elect politicians to enact its conclusions, ‘our NHS’ will continue to provide sometimes substandard and inefficient care in a service defined by its own introspection rather than by the needs of the community it should serve. Our healthcare metrics will continue to lag behind those of comparator nations. And I will continue to find myself, late in the afternoon, apologising to women and men for the inconvenience and anxiety as I speak to them about cancelling their procedure, hating myself for it but helpless to offer any solution or solace.

Decisions, QALYs and the value of a life

Here’s a well known thought experiment:

A runaway train is on course to collide with and kill five people who are stuck at a crossing down the track. There is a railway point, and you can pull a lever to reroute the train to a siding track, bypassing the people stuck at the crossing but killing two siding workers.

What would you do? What is the ethical thing to do? Why? What if one of the siding workers was related to you? What if the people stuck at the crossing were convicted murderers on their way to prison? What if the people on the crossing were not killed but permanently maimed?

Have a think about it before reading on.

Unless you answered that you did not accept the situation at face value (like Captain Kirk and the Kobayashi Maru simulation), or refused to choose, you will have made some judgements about the relative value of the choices on offer and perhaps the relative value of the lives at risk. You are not alone in this: in a 2018 Nature paper describing the ‘Moral Machine’ experiment, people from 233 countries and territories were prepared to make similar choices, contributing nearly 40 million decisions. On average there were preferences to save the young over the old, the many over the few and the lawful over the unlawful, though with some interesting regional and cultural variations.

Making value judgements about people in a thought experiment is one thing, but making them in the real world, with impacts on real people and their lives, is another. Ascribing value to a person’s life has grim historical and moral connotations. If someone is deemed somehow less valuable than someone else, there is a risk that this is used to justify stigmatisation, discrimination, persecution and even genocide. We therefore need to be extremely careful about the moral context in which such judgements are made and the language we use to discuss them. Human rights, justice and the fundamental equivalence of the life and interests of different people must be central.

Decisions which affect the health, livelihoods and welfare of citizens are (and need to be) made all the time. In some cases decisions affect length or quality of life, or liberty.  Decision making during the pandemic (whether locking down, opening up, isolation, mask wearing or travel restriction) is a potent recent example. Few people would argue that no decisions were necessary even if they may disagree with the details of some (or all) of the actual decisions made.

But if everyone’s life and interests are equivalent, how do we avoid becoming paralysed when faced with choices which inevitably will have (sometimes significant) consequences for different individuals? We do this by understanding that the values we ascribe to the people affected by a decision are not absolute measures of their worth, but merely tokens which allow us to undertake some accounting. If the process by which we allocate these tokens is transparent, just and humane then their use to inform a decision is morally defensible. Choosing to switch the points because this results in the least worst outcome on average is morally very different from choosing to switch them because you have a seething hatred of railway engineers.

What tokens can we use in healthcare to help us make decisions?

There have been attempts to provide a quantitative framework for measuring health. The most commonly recognised token of health is the Quality Adjusted Life Year (QALY), though there are others (eg. Disability Adjusted Life Years [DALY]). One QALY is a year lived in full health. A year lived in less than full health results in less than one QALY, as does less than one year lived in full health. How much we scale a QALY for less than full health is determined by studies asking members of the public to imagine themselves ill or disabled and then enquiring (for example) how much length of life they’d trade to be restored to full health (time trade-off) or what risk of death they’d accept for a hazardous cure (standard gamble).
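As a worked example, with invented utility values:

\[
4 \text{ years} \times 0.7 = 2.8 \text{ QALYs}
\qquad\text{versus}\qquad
6 \text{ years} \times 0.9 = 5.4 \text{ QALYs}
\]

An intervention that takes someone from four expected years at a utility of 0.7 to six years at a utility of 0.9 therefore gains 5.4 − 2.8 = 2.6 QALYs; decision makers can then set the incremental cost of the intervention against this incremental gain.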

The QALY is a crude and clumsy tool. It has been criticised for relying on functional descriptions of health states (like pain, mobility and self-care) rather than manifestations of human thriving (stability, attachment, autonomy, enjoyment), for systematically biasing against the elderly or the disabled, and for failing to take into account that health gains to the already healthy may be valued less than health gains to the already unhealthy (prospect theory). The scalar quantities contributing to a QALY (‘utilities’) reflect the perceptions of those surveyed during QALY development, validation and revalidation. These perceptions may be clouded by fear or ignorance and may have little relation to the real experiences of people living with a health impairment or handicap. Some have argued that QALYs have poor internal validity and are therefore a spurious measure.

These are important, though arguable, technical criticisms and to some extent explain the marked international variation in the use of the QALY: they are used in the UK and some Commonwealth countries, but have been rejected as a basis for health technology assessment in the US and Germany. And yet, decisions need to be made. If not QALYs then what else?

But the most emotionally charged criticism of QALYs is that they somehow inherently rank people’s value according to how healthy they are, or that the health of people who gain fewer QALYs from an intervention is somehow worth less than the health of those who gain more. This is a misunderstanding. QALYs (like the assessments you made of the lives at risk from the runaway train) are accounting tokens. When fairly, justly and transparently allocated (and technical criticisms might be important here), they merely allow a quantitative assessment of outcome. The rationale underlying QALY assessment is explicit that the value of a QALY is the same no matter who it accrues to: there is no moral component in the calculation. Nor is there any requirement that efficiency of QALY allocation be the sole (or even most important) driver of decision making.

A QALY calculation is fundamentally contingent on the interaction of the intervention with the people being intervened on. Someone’s capacity to benefit (which is what QALYs measure) depends not just on their characteristics but also on those of the intervention. Absolute ranking of QALYs as an empirical assessment of someone’s ‘value’ based on their health is therefore meaningless: a different intervention on the same set of people can result in a totally different estimate of outcome.
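Extending the earlier sketch makes this concrete. Two invented interventions applied to the same invented cohort yield different QALY estimates because the intervention differs, not because anyone’s worth has been re-appraised.

```python
# The same hypothetical cohort under two hypothetical interventions.
# All figures are illustrative, not drawn from any real appraisal.

def incremental_qalys(years_left: float, baseline_utility: float,
                      new_utility: float, extra_years: float = 0.0) -> float:
    """QALY gain of an intervention relative to doing nothing."""
    with_treatment = (years_left + extra_years) * new_utility
    without_treatment = years_left * baseline_utility
    return with_treatment - without_treatment

# Cohort: 10 years left at utility 0.6.
# Intervention A improves quality of life; B modestly extends it.
gain_a = incremental_qalys(10, 0.6, new_utility=0.9)                  # 3.0
gain_b = incremental_qalys(10, 0.6, new_utility=0.65, extra_years=4)  # ~3.1

print(gain_a, gain_b)
```

Same people, different numbers: the token measures the match between person and intervention, not the person.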

Consider if, rather than five people on the crossing, there were only one (and still two siding workers). A pure utilitarian consequentialist would switch from ploughing the train into the siding to smashing it into the crossing. But this doesn’t mean she has suddenly changed her mind about the value of the lives of the people involved, merely that the situation, and therefore the most efficient outcome, has changed.

QALYs don’t ascribe a value to someone’s life. They are accounting tokens, providing a (perhaps flawed) quantitative estimate of health outcome in a specific circumstance – usually that of evaluation of an intervention relative to an alternative in an identifiable group of people. This is not to say that some people might not be harmed by a decision based on a QALY assessment, but that, of itself, does not make the decision unfair or unjust.

Alongside utilitarian efficiency and QALYs, egalitarian considerations of fairness and equity, distributional factors, affordability, and political priorities may (and often do) feed into the decisions that are ultimately made. 

Consent and shared decisions

I go to consent a man for an angioplasty. He had full-length stenting of his superficial femoral artery [SFA] twelve months ago. This was difficult, required several attempts over a few days and multiple punctures into the artery at the groin, behind the knee and at the ankle. The procedure was done to treat an ulcer and this has slowly improved but is still not fully healed. He tells me that his prior procedures were dreadful: painful, long and frightening. He is clearly anxious about undergoing another one, and yet he is here, in recovery, ready to go through it all again.

He is here because a surveillance ultrasound has shown a stenosis in the stents. The protocol kicked in, an angioplasty was booked, preassessment happened, arrangements were made. We go through the consent together. He signs the form, we wheel him into the angio suite and off we go.

Thinking about my patient’s interaction with this small part of his healthcare, what strikes me is the degree to which the process disenfranchises him: his role seems passive in the face of an inevitable and gathering momentum that brings him into surveillance and from there to the angiography suite. Apart from the uncertainties involved (Does he need an angioplasty at all? What happens if nothing is done?), what opportunity has there been to explore his hopes and expectations for this treatment? Why have we called him here? What does he want? What do we want? Are those the same things? Even the language used (‘consenting’) creates a sense of passivity. It implies something done to the patient, not done with them.

Healthcare decision making is becoming increasingly complicated due to an ageing and co-morbid population, the management of imaging findings with an unknown natural history, and a wealth of competing technological treatment options with differing profiles of risk and benefit over time and for different patient cohorts. On top of this are the substantial uncertainties in the evidence about each of these. Confronted with these changes, it seems essential that patients are given the space to consider their options and (perhaps more importantly) to reflect on their broader preferences and how they might weigh their treatment options in the context of these. Taking patients through this is delicate, time-consuming and sometimes uncomfortable for everyone, especially if discussions broach the nature of risk or the inevitability of a finite lifespan.

GMC guidance on consent highlights information sharing but also emphasises the importance of dialogue and (crucially) of finding out what matters to patients, their ‘…wishes and fears… …and their needs, values and priorities…’. The Montgomery Ruling has created legal precedent effectively requiring such a dialogue. Clinicians are notoriously poor at doing this, and patient surveys often identify information gaps, misunderstandings and assumptions (by both patients and clinicians) about motivations and drivers for care. In retrospect, patients not infrequently regret their decisions.

Additionally, as treatment complexity increases, an individual’s healthcare becomes distributed across teams of specialist technicians, each with their own narrow area of expertise. Treatment pathways become protocolised and a patient’s relationship changes from being with an individual to being with an organisation. When a patient sees different professionals at every stage of their healthcare journey there is little opportunity for the development of the rapport and trust which gives them freedom to state their preferences and question the assumptions driving their management. Every individual interaction adds to the momentum toward a predetermined outcome that they might not want. Paradoxically, as decisions become more complex, the opportunity for dialogue exploring them declines.

What might we do differently? 

Decision support tools are being investigated to help patients navigate the complexities of high-risk surgical decision making. In the meantime, I am struck by a personal analogy: a complicated decision I recently had to make. Many of the conceptual issues when making an investment decision are similar to those faced by patients considering healthcare: the trade-off of short- and long-term outcomes; the nature and magnitude of uncertainty in these; the reliance on an expert to interpret complicated data and communicate them in a meaningful way; and the willingness to delegate some of the decision making.

But what was very different in my decision making was what my financial advisor and I did before we began to talk about particular investments. We spent a long time assessing my attitude to risk, my view on ethical or green investments, the timeframe over which the investment was required and the context of the investment within my other priorities. Only then did we move on to discuss potential investment vehicles. I ended up agreeing a recommendation: I’m not sure I fully understand the complete details. I certainly couldn’t quote the numbers. But I am sure I trust my advisor’s recommendation, knowing that he knows what my perspectives are.

There are similar (though limited) models in healthcare. The UK Resuscitation Council’s Recommended Summary Plan for Emergency Care and Treatment (ReSPECT) and the US Physician Orders for Life-Sustaining Treatment (POLST) processes encourage patients and clinicians to discuss personal priorities for care and options for management in the event of an emergency toward the end of life (frequently, though not exclusively, concerning CPR in the event of cardiac arrest, or ceilings of care such as transfer to ITU). Having these conversations is difficult, but their importance lies not only in the outcome but also in the process itself, which explicitly demonstrates that patients retain agency and that healthcare organisations recognise and value this.

Can we adopt similar models more broadly in consent? Perhaps. We need to develop practical techniques and questions which allow wider perspectives to be explored with patients. However, the fundamental prerequisite is that we be curious about what our patients want, on their terms, not on ours. We need to offer them time, space and support to help them express these wants, and we need to listen, understand and heed them before we negotiate a decision.

A focus on consent as a technical exercise in the sharing of information is a narrow and meagre understanding. Ideally, consent is an exploration of the patient’s values and priorities, and then a contextualising of the treatment choices on offer in the light of these priorities. Unfortunately this is often logistically impossible.

My patient’s angioplasty was quick, technically successful and apparently pain free. I booked him back into surveillance. Is this what he wanted? I don’t know. I’m sorry to say, I didn’t ask.

Resilience

I’ve always thought of myself as stoical and unflappable. I’ve never really engaged with resilience programmes. But I have just finished rewatching the first series of Cardiac Arrest. The final episode is harrowing and at the end, alone in the darkened living room, the rest of the family asleep, I found myself weeping, uncontrollably, silent tears streaming down my cheeks, with images of my career passing through my mind. Patients resurrected in thought.

My first memorable patient: a man with post-ITU polyneuropathy and an infectious good mood despite his desperate and lethal disease, his face laughing from his hospital bed across the intervening two-and-a-half decades. A Frenchman in coronary care in crashing cardiac failure who, by the time I arrived, had already received more than the maximum treatment I knew how to give, and whose family watched me as, paralysed with fear, I did nothing and he slid towards death; a Christmas morning staring fascinated into the skull of a young man who, chased by police, had crashed the stolen car he was driving into a fence and blasted the brain from his head, leaving the tattered remains of his brainstem and optic nerves the sole recognisable structures in his cranial cavity; the poor woman with a posterior fossa haemorrhage who spent all day wailing in distress, disabled and in pain, and us so inured to it that we, privately and with a savagery that now makes me gasp, bastardised her name into a cruel rhyme reflecting her plight; the man so badly beaten his head looked like a baggy purple football twice the size it should have been, whose shocked, distressed and bewildered family insisted it was racism underlying the suggestion we withdraw treatment; the student with brittle asthma and a silent chest, found collapsed in his hall of residence, who I got breathing again in the emergency department, but too late for his oxygen-starved brain; a young mother bravely accepting the fate the newly diagnosed glioblastoma had made for her, knowing she would never see her child’s 10th birthday. The woman who died of rhabdomyolysis after an elective intervention whose wisdom I’d doubted from the outset. Every pointless clamshell thoracotomy I’ve ever seen, performed by overexcited surgical registrars, mutilating the freshly dead in a desperate bid for heroism or fear of the inevitable. The middle-aged man in replica strip I tried to resuscitate out-of-hospital, collapsed on an adjacent 5-a-side pitch in cardiac arrest, accompanying him in the ambulance and standing in resus in my football kit, his regurgitated stomach contents drying on my arms and legs as the resuscitation failed and the hospital moved on. A prisoner, cursing me and trying to punch me with his good arm while I struggled to fix the fistula I’d just ruptured on his other, flailing and thrashing as I tried to stop him exsanguinating into his axilla. A woman with a high spinal injury, newly quadriplegic, mute from the tracheostomy and intercostal paralysis, who I heard on the radio some years later, confident, empowered and happy, describing her campaign for disability rights.

The blood. Blood everywhere, on my gloves and gown, behind my eyes and in my head. The complexions only knowable by experience: the grey putty of haemorrhagic shock, or the waxy sheen of the profoundly brain injured. And the hours: so many hours. Hours and hours and hours. The late nights and missed dates. The unique quiet cacophony of a ward in the small hours. Footsteps, whispers, snores, electronic pings, cries. The smell of chlorhexidine, diathermy, toast and butter. Death and life. Love and anger. Futility. The bravery, joy, compassion and cynicism. Fear, pain, frustration, soul-sapping fatigue, paperwork, exams, alcohol, promiscuity. Inspiring, wise, dedicated role models and an occasional self-interested charlatan. Trainers who don’t train and superb teaching at 3am from someone as exhausted as you. Annual assessments where you complain about your lack of opportunity to practice and they mark you down for exactly that. Mostly caring, helpful, supportive colleagues and a few memorable bullies. The immensely satisfying collaboration and the thrill of making someone well. The faith and trust placed in me and in what I represent by the vulnerable and sick.

A boy, 13, now barely recognisable, moonfaced and bald from cycles of PABLOE and ChlVPP, deciding he wanted to practice medicine.

These stories are not unique, every healthcare professional has a collection, but these particular ones are mine, woven into my personality without apology. They contribute to who I am, for better or worse.

Resilience? Medicine is a rollercoaster which never stops. Just jump on and jump off. Sometimes the ride is fun. Sometimes it’s appalling.

I wipe the tears away and begin to type. 

Am I resilient? Are you?