In praise of doing nothing

“Don’t just do something, stand there”

A man in his early 80s is diagnosed with a large hepatocellular carcinoma, detected incidentally on an abdominal ultrasound performed for an episode of renal colic. The liver lesion is completely asymptomatic. The ultrasound prompted an MRI of the liver, then a staging CT, a visit to the liver surgeon and ultimately a plan for hemihepatectomy. As there will not be enough residual liver after this, he is referred to me for portal vein embolisation to get the future remnant liver to grow before resection.

Modern medicine is remarkably safe. Portal vein embolisation carries negligible risk. Major liver resection in octogenarians is associated with a perioperative mortality of about 5%. Dramatic improvements in the safety of many invasive procedures have been facilitated by advances in anaesthetic care, pre- and post-operative medical management, rehabilitation and nutrition, and by the advent of minimally invasive surgery and image-guided techniques. This is unarguable progress.

However, when procedures are so safe that there is very little downside to undertaking them, we create a set of ethical and evidential questions which we are poorly equipped to answer. These relate to long term outcomes (and predicting these on a patient-by-patient basis) and the appropriate use of resource. As questions about ‘can we do this’ become easier, questions about ‘should we do this’ become more complex. These dilemmas are not unique to surgery and intervention: decisions about (for example) starting renal replacement therapy are similar. 

Differing perspectives and different information are needed to answer questions about ‘can we’ and ‘should we’.

‘Can’ questions tend to be technical. Is the procedure possible? How might we do it? What are the immediate peri-procedural risks? Outcomes of interest are usually focussed on procedure, process, operator or institution. They are easy to measure and so are supported by a large literature of mainly cohort and registry based data. We know whether we can or not.

‘Should’ questions are more nebulous, patient centred and frequently more philosophical. Why are we doing this intervention? What are we trying to achieve? What outcomes are relevant and for whom? What do patients think? What does this patient think? What happens if we do nothing? And more widely, is it worth the cost and on whose terms? Producing evidence to answer these questions is much more difficult. It requires engagement with patients, long term follow up and an assessment of the natural history of both patient and pathology with and without the intervention. This is time consuming, expensive and sometimes ethically difficult where professional norms have progressed beyond the evidence base. Research methodology advances such as the IDEAL collaboration (and perhaps routine dataset linkages) are mitigating some of these implementation barriers but adoption is slow.

Furthermore, even if we have long term meaningful outcome data for a cohort of patients, how does this relate to our patient? Risk prediction models and decision tools are frequently inaccurate or so cumbersome as to be clinically useless (or both!).

The relative absence of ‘should’ information makes informed consent with a patient about whether to proceed or not very difficult. Inevitably, fundamental personal, professional and philosophical inclinations will colour evidence interpretation and consent conversations. It’s easy to influence a patient toward a favoured decision, especially if they are minded to leave the ultimate decision to the doctor. Clinicians vary in their willingness to intervene: some are more conservative than others and it’s a well-trodden path that youthful exuberance is tempered as a clinician gets older. We should be honest (with the patient and with ourselves) about the extent to which our personal philosophy and practice style impact shared decision making.

But it’s not just personal philosophy that influences decisions in the common situation where the evidence base is limited. Setting aside the distorting influence of medical device marketing (conservative management rarely generates revenue), there are structural features within healthcare which promote intervention over doing nothing.

First, specialties like interventional radiology exist (unfortunately still) almost entirely in the context of doing procedures, rather than managing patient pathways. Not doing something can therefore be seen as an existential threat. What value is added if not by doing interventions?

Second, with the best intentions, we benchmark some outcomes as failures, creating pressure to avoid them at all costs. But heroic interventions to prevent these eventualities are not always good healthcare. Major amputation in peripheral vascular disease is not always inappropriate. Dying with, or even of, a malignancy does not always represent a bad outcome.

Third, there is a psychological tendency, when faced with an individual at risk, to see doing something as a moral obligation and always better than doing nothing even when the benefit is uncertain. When the risk of intervening (‘can we do this’) is very low what is there to lose? The obligation is exacerbated by public perception of modern healthcare as almost omnipotent. This ‘Rule of Rescue’ is a powerful motivator for healthcare staff who are expected to treat, and for patients who expect to be treated.

Finally, follow up is often delegated away from the operator. Without the immediate and personal experience of our patients’ outcomes longer term, it’s easy to equate safety with efficacy but this is a reductionist fallacy. A person’s outcome depends on more than the successful management of a single pathological entity. The literature is replete with examples of successfully performed interventions that make no difference to outcome. And there is ample evidence that clinicians often overstate the benefits and understate the risks of the interventions they offer.

These structural features nudge modern healthcare towards ever increasing intervention. But in the rush to intervene we lose sight of the option of doing nothing and thereby risk becoming more technical, less humane and ultimately less effective as healthcare providers. Sometimes (often?) a sensitive conversation, compassion, empathy and reassurance are preferable to virtuoso technique. The Fat Man’s 13th Rule is as relevant now as it was 40 years ago: The delivery of good medical care is to do as much nothing as possible.

My general position has slowly become more conservative and I declined to do the portal vein embolisation after a long discussion with the patient. The referring surgeon (someone I respect greatly) and I had a ‘frank exchange of views’. Fourteen months later the patient remained well and his lesion was no larger. Clearly this single anecdote cannot be used to justify an overall philosophy: he might have died of the lesion. He may still. I don’t think that is necessarily a failure.

What do we mean by need?

The NHS Constitution states that access to its services should be based on clinical need and not on an individual’s ability to pay and that the NHS should provide a comprehensive service, available to all. For many people in the UK, these are articles of faith: fundamental organising principles that underpin one of the great achievements of postwar British society. They seem, on the face of it, to be unassailable: who could argue? But they raise the questions: what do we mean by need? What is included in a comprehensive service and why?

My dictionary defines need as being in want of something, or to require something ‘of necessity’. In discussing need in the context of organising the NHS we should describe what this ‘something’ is. Do we mean health or healthcare (or something else)?

In 1948 the WHO defined health as not only the absence of disease or infirmity but as a state of complete physical, mental and social well-being. This definition has been criticised for its somewhat vague language (‘well-being’) but also because it excludes people who consider themselves healthy but who nevertheless have ‘disease or infirmity’ (for example those with disability). More recent definitions describe health more in terms of a resource: one of a number of physiological needs that facilitate a flourishing life.

Healthcare is more prosaic. It’s the prevention, management or cure of disease and is therefore defined much more narrowly than health. If healthcare is effective it can result in better health. Other means of achieving better health are sanitation, workplace safety and attention to social, environmental and behavioural factors (e.g. smoking campaigns, seatbelt legislation), as well as broad public health initiatives such as vaccination and antenatal care.

Making a distinction between health and healthcare is useful as it allows us to think about healthcare more instrumentally. Healthcare is a means of satisfying a need for health, which in turn allows us to flourish. Other than its practitioners (who rely on it for their income or status), nobody has a need for painful, intrusive, embarrassing and inconvenient healthcare in and of itself. Thinking about healthcare in this way dilutes the emotional response we have about its provision and funding and allows us the cognitive space to consider whether healthcare interventions are valuable and for whom.

But before we consider whether particular healthcare interventions meet our health needs, we should ask ourselves what our priorities for health are (where ‘we’ and ‘our’ refer to society at large, not doctors, technicians or patients with vested interests in particular conditions). Even if individually we would like to remain in perfect health for ever (there are good philosophical grounds for thinking that immortality is not necessarily to be desired), we recognise that this is impossible. What then is a reasonable health expectation? How does this individual expectation accord with providing a fair distribution of health across society when doing so requires resource? Is this even something we are interested in achieving?

Answering these questions requires us to make some moral choices about what we value individually and collectively. For example, should we value better health for everyone at the expense of increased health inequality? Should we value efficiency (more health) over less efficient targeting of those in poorer health? Do we prefer health gains to the young or the old, the ill or the healthy, the rich or the poor, the productive or the unproductive (however you define that)? Is it better to produce small health gains for many or large gains for a few? To what extent should we penalise those who make adverse lifestyle choices (considering that these are frequently a product of social conditioning)? Is equal access to healthcare the same as equitable access? If not, which is preferable?

To explore this, undertake a thought experiment:

Consider two groups of people, each with a 5 year survival of about 70%. One group consists of average 79 year old males in the United Kingdom; the other consists of children diagnosed with a soft tissue sarcoma. Do both these groups have equal need for health? Now imagine there are expensive treatments that can increase the median life expectancy of both groups by 1 year. Ought both treatments to be funded in a comprehensive health service? If we can only afford one, which group should be prioritised? Why?

So need for health, at least in part, is framed by our value judgements about what is fair, right or desirable. It is based on societal preferences rather than any empirical or underlying ‘truth’. Need is defined by our judgements about what is important.

Perhaps that seems a bit nebulous, theoretical and not particularly helpful in organising healthcare. While it’s possible to quantify some value judgements, the resultant metrics are incomplete, crude and only reflect the choices of the people surveyed in their creation (for example the QALY). Maybe we’d get further by going back to considering healthcare as the primary need. After all, this is the service people access and experience in order to satisfy their subjective need for greater health. If we choose to define need as an entirely subjective experience, need becomes equated with demand. Might demand be a better measure to determine what healthcare society ought to provide?

Neoclassical economic theory deals much more with demand than with need. Distribution of a commodity (in this case healthcare) is determined by familiar marketplace concepts under assumptions (amongst others) that individuals are the best judges of what is good for them and that they will act to maximise their welfare. In this view, the distribution of health is determined by market forces and individual decision making within this marketplace.

There are several objections to allowing market forces and demand to be the arbiter of need in healthcare. The most potent of these is that there are multiple inherent conditions in the provision of healthcare which predispose to market failure.

  • There is a marked information gradient between the customer (patient) and provider (the doctor or the healthcare institution). The customer is therefore not well placed to make a judgment about what is in their best interest. While it is to be hoped that medical professionals are honest brokers, they are nevertheless subject to unconscious biases, to assumptions about what patients think, and to personal, professional and cultural pressures (such as fee-for-service or intellectual investment in some technologies).
  • The demand for healthcare may be heavily dependent on its supply, not on a fundamental underlying need (supplier induced demand). The rolling of block contracts year-on-year is an example of this: future supply is planned on the basis of existing resource and infrastructure rather than on a reassessment of ongoing necessity. It is also evident in the development of new technologies which sometimes seem to be driven by professional and commercial interest rather than a true needs assessment, resulting in treatments apparently in search of a disease. Do we need nanoparticles to reduce restenosis in dialysis fistulae? Or endovascular robots? Or SIRT for advanced hepatocellular carcinoma? Maybe, maybe not. Innovation can be transformative (the iPhone, triple therapy for H. pylori, the COVID vaccines) but it can also create demand without reaping any (or enough) health benefit.
  • Health and wealth are correlated. The wealthier live longer and are healthier at all stages of their lives. The healthiest are best placed to demand healthcare and the least healthy are the worst placed to demand it, so demand does not reflect lack of health. Making healthcare free can mitigate this differential demand, but does not abolish it entirely, particularly for utilisation of secondary care. This leads inevitably to market inefficiency and widening health inequality.

Even in a functioning market, there is no reason to assume that individual people’s demands for healthcare (and maximisation of their personal health) will result in an overall societal improvement in health or its distribution in a manner we consider important. This remains true even if the healthcare demanded is effective and cost-effective. Demand has no moral or socially determined component. It is purely a function of individual wants and preferences, the drivers of which may or may not be things society values. Individual preferences may be (amongst other adjectives) altruistic, well-meaning and informed or they may be selfish, bigoted, ignorant or cruel.

For these reasons we cannot rely on demand as a valid surrogate for need or as an organising principle for healthcare.

So does this analysis get us anywhere? 

Need is a value laden concept. It speaks to a lack of something important, and fulfilling need brings with it ideas of altruism, charity and obligation. But without some clarification, this construction of need and our response to it is not much use in determining priorities for healthcare provision. Is there anything useful we can derive from a discussion of need?

As a first set of simple principles, it seems axiomatic that a healthcare intervention must be effective before it can be needed. There can be no need for ineffective healthcare. Healthcare should also be as efficient as possible in improving health, meaning we maximise health gains with the available resource. We should therefore demand, as a minimum, that healthcare is both effective in improving health and surpasses a baseline level of cost-effectiveness before it can be considered needed.
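To make that baseline test concrete, the standard health-economic formulation compares an intervention’s incremental cost-effectiveness ratio with a willingness-to-pay threshold. This is a sketch only (the notation is mine and is not used elsewhere in this post):

\[
\mathrm{ICER} \;=\; \frac{C_{\text{intervention}} - C_{\text{comparator}}}{E_{\text{intervention}} - E_{\text{comparator}}} \;\le\; \lambda
\]

where C is cost, E is health gain (usually measured in QALYs) and λ is the willingness-to-pay threshold, conventionally cited by NICE as roughly £20,000–£30,000 per QALY gained. On the argument above, an intervention that fails this test may still be wanted, but it cannot be said to be needed.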

Beyond this, organising healthcare according to need depends on a value framework that we should ideally make explicit. In such a framework, lack of health does not necessarily imply need of health (or healthcare), with obvious implications for the concept of comprehensiveness: recall the two groups in the thought experiment. Need is determined not by what a person’s health is but by what we are prepared to do about it. It is forward rather than backward looking. It is neither subjective nor objective: rather, it is defined by society’s collective values.

When healthcare resource is limited, even effective and cost effective healthcare may become unaffordable and it may be efficient and equitable for some needs to go unmet. How much we should prioritise health needs at the expense of other priorities such as the education of our children, security, a fair and resourced judicial system, welfare or the protection of the environment is a much wider, though analogous, question. Our health needs exist within a much broader context than that of health alone.

Further reading:

This blogpost was heavily influenced by a collection of essays by Tony Culyer, Emeritus Professor of Economics at the University of York, collated and printed as “The Humble Economist”.

Some personal reflections on AAA guideline development with NICE

The development of the NICE guidance on the management of Abdominal Aortic Aneurysm [AAA] has been a long, drawn out and difficult process. Publication of the final guidance (in Spring 2020) was overshadowed by the immediate (and ongoing) crisis created by the COVID pandemic, which meant the revised recommendations did not get the scrutiny they deserved. While editorial comment was made about the surprising nature of NICE’s U-turn, the opportunity for wider debate and discussion has necessarily been lost.

My colleagues on the NICE AAA Guideline Development Committee [GDC] have recently published our (rejected) final recommendations on the repair of unruptured AAA. These were revised from the draft guideline after taking into account stakeholder feedback and NICE’s view on implementation but they were unacceptable to NICE. The basis for this was not made clear and the process by which NICE derived the recommendations it eventually published remains remarkably opaque.

The purpose of this blog post is to provide a personal perspective on my involvement in the guidance development process and to offer some suggestions for a way forward.

Since I was appointed to the AAA GDC, and particularly since the publication of the draft guidance in 2018, my involvement with NICE has been a significant professional and personal challenge. By and large (and with a couple of notable exceptions) I have not been subjected to the opprobrium that some of my colleagues on the committee have had to deal with, but the behaviour of NICE’s senior management has left me with a deep sense of frustration, even anger, at work taken for granted or ignored and at opportunities missed.

The gulf between the evidence base for the elective repair of unruptured AAA and current UK (and international) practice created a problem for NICE which was always going to result in difficult guideline implementation and professional acceptance. The complete proscription of standard EVAR in the draft guidance was a substantial shock to the vascular surgery and vascular radiology community and resulted in much tension and professional anxiety. This was evident in the stakeholder feedback received, which prompted a thoroughgoing review of the draft recommendations.

The health economic argument against EVAR for people fit for open repair is undeniable and I was (and remain) entirely content with the recommendation that EVAR not be offered to this group (whether you agree depends largely on whether you think cost-effectiveness is a reasonable basis for limiting access to an intervention). However, for patients unfit for open repair, my personal view was that there were cogent arguments for changes to the draft. In particular I had concerns about the generalisability of the randomised data to the whole of an almost certainly heterogeneous population, and the difficulty of applying population data to individuals in whom the alternative was no repair. However, actually formulating revised recommendations incorporating these arguments was extremely difficult and I was unable to persuade anyone else in the GDC that my clumsy suggestions for rewording were an improvement. Ultimately, my GDC colleagues convinced me that the minor revisions we made to the draft were reasonable and I was happy to accept cabinet responsibility for them.

However, NICE were unwilling to publish the revised recommendations on repair of unruptured AAA, and an impasse was reached in spring 2019. To resolve this, over the summer of 2019 the GDC made considerable further efforts to revise them into a format acceptable to NICE’s senior management, incorporating the themes raised by stakeholders and NICE’s implementation concerns. These are the recommendations the GDC has recently published and I think they are excellent. They have the unanimous support of the whole GDC, with no dissenting voices.

During this time, however (summer 2019 onwards), NICE abandoned and then (apparently deliberately) sidelined its GDC. One can argue the extent to which this behaviour failed the public in producing suboptimal recommendations on elective AAA repair. But NICE certainly failed in its duty of care to the committee members, who received no support and only cursory communication and explanation. NICE simply asserted its right to editorial control over the recommendations it publishes – a right it has never previously exercised. It is astonishing that NICE did not make more effort to engage with its GDC over 2019 to find a mutually acceptable set of recommendations. I am left with the sense that I, and the other GDC members, were deemed irresponsible and uncompromising absolutists when nothing could be further from the truth: despite NICE’s disengagement, we made huge efforts to create recommendations that were consistent with the evidence base, stakeholder comment and the published philosophical and ethical frameworks within which NICE requires its guidance to be developed.

I am also left with a sense that there has been a missed opportunity to influence repair strategies for unruptured AAA. The final guidance NICE published on this is bland and anodyne to the point of being meaningless. The cynic in me thinks this is deliberate: recommendations that don’t recommend anything allow practice to continue without amendment or cultural shift. Perhaps this is a convenient outcome for those to whom NICE seems to have turned, once it abandoned its GDC.

For example, NICE’s final (published) recommendation about repair for people unfit for open surgery: 

1.5.5 Consider EVAR or conservative management for people with unruptured AAAs meeting the criteria in recommendation 1.5.1 who have anaesthetic risks and/or medical comorbidities that would contraindicate open surgical repair.

How should a clinician consider this? What evidence and perspective should they bear in mind when making a decision with the patient? What information should the patient be offered? Does this recommendation contain anything that will help vascular specialists amend their practice to mitigate the marked regional variation in the management of unruptured AAA in the UK (variation that surely cannot be explained by case-mix)? While there is some evidence that the draft guidance has resulted in a small increase in the proportion of AAAs repaired with open surgery, whether this endures remains to be seen and there is nothing in the guideline to lock it in.

These concerns were not just mine. Ultimately the whole GDC was in agreement (again, with no dissenting voices) that NICE’s published recommendations about repair of unruptured AAAs neither reflect the evidence nor (even) NICE’s own narrative accompanying the recommendations. Given the GDC’s professional and lay diversity, this unanimity is striking. We all agonised over the decisions we made, but all independently reached the same conclusion. This was not some kind of bunkered groupthink.

Despite all this, there are some significant positives to be taken from my experience with NICE, even if I remain dissatisfied with the process and aspects of the eventual outcome. I enjoyed meeting colleagues I would otherwise not have met, and in particular I enjoyed the careful and academic consideration of the evidence base and the challenging of some of my preconceptions. NICE’s technical teams and information specialists are impressive and the clarity and precision they brought to committee discussions were very enlightening. Obviously I now have a valuable understanding of how NICE develops guidance and some of the compromises involved.

My experience also prompted a hitherto unsuspected interest in health economics and in particular the ethics associated with using this in decision making about health and healthcare interventions. What do we mean by need? How do we balance choice, affordability, cost effectiveness and equity? When demand in healthcare is substantially supplier driven, how do we prevent market failure? What value judgements do we make, or need to make, when allocating resource? These are big questions and, set against them and the huge challenges of the COVID pandemic, a spat about elective repair of unruptured AAA seems insignificant. But the themes raised when thinking about provision of AAA repair are an illustrative worked example in microcosm. Recovery from the pandemic urgently requires that, across the entire NHS, we allocate resource where it is most effective, rather than on the basis of special pleading or the misguided notion that doing nothing represents a medical or moral failure. I’m aware this is not everyone’s cup of tea!

Where do we go from here?

I think we need to consider carefully the language we use about AAAs and their repair: language has a powerful effect on thinking and constructs meaning. Elective AAA repair is predominantly an exercise in risk factor management in a multimorbid population, not a life saving intervention. While there has been a (thankful) move away from ‘ticking time bomb’, words like ‘threshold’ and ‘turndown’ imply a necessity for repair that is unsupported by the evidence base. They create a psychological momentum toward intervention that takes conscious effort to halt. ‘Likely (or unlikely) to benefit from repair’ seems more appropriate, though it does not trip off the tongue.

More importantly, we need to understand more about AAA as a disease, not just about EVAR or open repair as ways of treating it. This is an essential shift in emphasis. Until we know more about the contemporary prognosis of people with AAA we will be unable to make decisions with them about whether repair is worthwhile (by whatever criteria we choose: clinical effectiveness, cost effectiveness, patient satisfaction or something else). Focussing solely on research into the outcomes after repair (with any technique) will always fail to provide an answer to the first question that we should ask: will this person benefit from having their AAA repaired? The paradox is that it is impossible to investigate the natural history of AAA if we continue to repair nearly all of them. And it is also possible that once we know the prognosis for people with AAA, fewer procedures will be undertaken: a professional problem for vascular specialists who enjoy the technical challenge of AAA repair or have built a career on it, and a financial one for the medical technology industry. Will we like what we find?

The future of AAA repair is in our hands. We can choose to focus on AAA repair technique or on the AAA and the person with it. So next time you go to a conference, or attend a seminar, a webinar or a course, ask yourself what is being addressed: technique or disease? When key opinion leaders speak at tentpole events like CIRSE, LINC, Cx, SIR and BSIR, ask them: “Who should not get repair and who should?”, “Does cost effectiveness matter?” Do their answers convince you? Are these questions they are interested in? Only by challenging can we shift the frame of reference.

As for NICE, I still believe it is of immense value as an organisation. But if you do get involved (and despite everything I would probably recommend it) be prepared to have some of your idealism and enthusiasm tarnished by political expediency. I wonder whether the medical profession and wider society are ready for some of the conclusions that flow inevitably from a consistent implementation of NICE’s principles.

Further reading:

A response to Balancing evidence in guidelines – an essay (BMJ)

NICE’s AAA guideline – an unexplained U-turn (BMJ opinion)

Two responses (from NICE and from the VSGBI) to the BJS editorial referenced in the first paragraph

Tips for IR training and beyond unconscious competence

In order to master a skill, a trainee needs to progress from novice to journeyman to expert (a process beautifully described in Roger Kneebone’s excellent book ‘Expert: Understanding the Path to Mastery’). Typically four stages of competence are described: unconscious and conscious incompetence, followed by conscious and then unconscious competence.

If you are a trainee, your trainers will be most anxious about you at stage one (unconscious incompetence) – this is where you’ll be kept on a tight leash, especially if they decide you are overconfident. The most important thing your trainers want to know about you is that you know when to stop and ask for help. If you’re not aware of the limitations of your skill you’ll make an error, and that makes trainers nervous because they carry the responsibility. The Dunning–Kruger effect can manifest here. It’s a difficult time for you, because everything seems bewildering, but don’t try to impress by biting off more than you can chew. Listen, watch, ask questions, be keen and present. Take on small simple tasks with direct supervision and don’t be afraid to stop if you get out of your depth. Asking for help is not weakness, it’s strength. Your trainers should support you in this, and if they don’t they are not good trainers.

Stage two (conscious incompetence) is depressing. You’ll be able to do some limited stuff, but the enormity of the knowledge and skills left to acquire will have become apparent. The only way to get through this is by hard work. Slowly but surely you’ll get better: more knowledgeable by reading and more technically proficient by doing. The amount of direct supervision you need will slowly decline, though you’ll still need some. You’ll also start to develop some of the essential non-technical skills you will need to succeed by forming relationships with colleagues, developing your bedside manner, observing others and how they behave and by experiencing the way your department runs. Moving around between sites can be difficult as you’ll need to convince a new set of trainers each time that you’re not still at stage one. But moving can be a bonus as you’ll perceive subtle differences in how departments work that you can absorb or discard as you like.

By stage three (conscious competence) you will have started to find your feet and feel more confident. It’s still an intellectual and physical effort to do cases but you feel as though you’re developing. Each case is different, interesting and presents new problems for you to negotiate and learn from. You can take on new challenges because you can see their scope and understand the fundamentals of how to address them. As you develop people start to ask your opinion, or you feel you can disagree with your trainers about how something might be done and legitimately defend your position.  I think most new consultants start in the middle of this stage. The conscious nature of their expertise also makes them ideal trainers because they can still explain how and why they are doing something. This time can be really fulfilling: you feel that, finally, after all that hard work, you’ve arrived.

Then slowly, imperceptibly you will progress to stage four: unconscious competence. The cases no longer require the same degree of intellectual effort as before, decision making starts to become automated. You can do more than one thing at once. The difference (for me) is the difference between knowing and feeling. You develop and trust a ‘spidey-sense’ for when something’s not quite right. You start to do things that you cannot explain (and because of this your usefulness as a teacher starts to decline). Your key decision nodes start to reduce and you get faster and more assured: maybe some of that imposter syndrome starts to dissipate. You start to feel you belong, in part perhaps because your colleagues (locally and nationally) start to become less the people who trained you and more the people who you trained with. You start to design and own the pathways and processes for patients, trainees and organisations, rather than just following someone else’s rules. You negotiate uncertainty with increasing confidence.

One of the challenges at the stage of unconscious competence is focus. When you no longer need to think about what you are doing some of the time, you can allow other thoughts to impinge on your cognitive bandwidth. Maybe you’re working on a paper, or for a national committee. Maybe the frustrations of working for a large inefficient organisation get you down. Maybe you are navigating the joys of being a new parent. Maybe you’re becoming bored. If professional or personal issues are no longer crowded out by the task at hand, without vigilance they, in turn, can crowd out your concentration and performance. Recognising this is hard and requires a self awareness that is rarely taught.

The longer you stay an independent practitioner, the greater the risk that your practice can drift precisely because you are independent. How do you know you’ve not drifted back toward incompetence in some specific regard if you’ve unconsciously switched off and stopped thinking, started to take short cuts, stopped caring or started to burn out, if you don’t read the literature critically (or at all) or if you’ve fallen into one of many other potential professional pitfalls? Unlike aviation, medicine has avoided high-stakes summative assessments of performance, and by the time your practice has drifted sufficiently to affect your outcomes (to the extent that someone will notice) it’s already too late.

The challenge then is to stay unconsciously competent as you finally become expert.

Being open within a supportive group of colleagues is crucial. A toxic working environment causes many issues but if colleagues cannot critique each other productively then each individual becomes increasingly siloed. A functional meeting at which errors, difficult cases and complications can be discussed without fear of censure is a huge departmental asset. Undertaking cases jointly with colleagues can also be useful for building and maintaining relationships and for sharing experience. It can also be fun.

But perhaps one thing we don’t exploit much in medicine is mentoring (or coaching). Atul Gawande discusses this in his 2011 essay ‘Personal Best’. A mentor is there only to watch you practice and to offer non-judgemental support, make suggestions and offer advice: a meeting of expert minds rather than a formal training relationship. It’s very different from a colleague working with you. A mentor is ideally someone you invite in, not someone imposed from outside. The trigger comes from the mentee and this requires them to recognise the need. Such mentoring relationships are difficult in practice as they take time and organisation, and they rely on personal qualities (in both the mentor and the mentee) like openness, tact, self-awareness and the ability to reflect. These are not universal traits. For a mentoring relationship to work we need, at the peak of our professional arc, to be humble.

Progressing through the stages of competence is a journey to be enjoyed for the flourishing it brings, not just for the end goal of a consultant (or attending) job. Lifelong learning does not stop with your Certificate of Completion of Training, and knowledge and skills continue to accrue well into the later stages of your professional career. But the more senior and the more independent you become, the more you need, above all, to know yourself.

PACS: the miracle and misery

I’m writing this sitting at a MacBook Pro, using Pages. The screen is simple and uncluttered, the buttons on the right are clear, and they do exactly what I expect of them. The ways I cut, paste, copy, drag, highlight or format my text are obvious and similar across most of the software I own. The only impediment to my writing, then, is the failure of my imagination. The process of getting my thoughts (when they appear) out and onto the page is almost effortless. A few keystrokes and there they are!

The human interface of modern, well designed software is invisible, presenting no cognitive barrier to the user, so they can focus their entire attention on the task at hand. This does not mean there is no learning curve, but learning the software is mostly also simple, even playful. Someone who has never seen an iPad before can pick one up and be using it in minutes. The wafer thin instruction manual is redundant. There is no need for an online learning session: playing with the device is enough, the software guides and teaches you.

Of course, this does not happen by accident. Modern software is the product of trial and error, human factors design, ergonomics and psychology: thousands of hours of development and many, many iterations of the program, each building on the last (remember WordPerfect?). The key to success in this development is a focus entirely on the user: on what they will need, rather than on what the system architecture might favour. Steve Jobs famously insisted on rounded corners to buttons and had his team work tirelessly until the impossible was achieved.

Which brings me to PACS.

As radiologists we spend nearly all our working hours in darkened rooms looking at computers running PACS software: these systems, huge databases storing terabytes of information, are a miracle of information management. Given the complexities of managing a database that is constantly being added to, queried, indexed and manipulated by hundreds of simultaneous users and medical image generation devices, 24 hours a day, 365 days a year, it’s astonishing they don’t fail more often. Modern healthcare could not function without readily available medical imaging and this would not be possible without PACS. These systems are a quiet revolution that has occurred within my working career (everyone who was a junior doctor in the mid- and late 1990s will remember wasted hours searching dusty libraries for dog-eared buff packets containing an almost certainly incomplete set of that patient’s filmed imaging).

So why then do I hate almost every single PACS system I’ve worked with?

Maybe part of the answer is in the forgetting of how bad it was before. I just about remember reporting plain radiographs from film, but the only time I saw filmed CT images as a radiology trainee was in my FRCR viva. I do remember the on-call system being so slow that scrolling through an image stack was impossible, and system crashes were sufficiently frequent that I ended up making paper notes for a while. Compared with that, modern PACS systems are amazing.

But I think the main issue is the ergonomics of the interface which, unlike the software on my laptop, is frequently cluttered and busy, sometimes confusing and (most aggravatingly) not intuitively interactive. Relevant information is lost in a sea of metadata. Things that should take one click of a mouse take several. Drag and drop works in some places and not others. Normal workflow requires cumbersome workarounds. Navigating the software creates an additional cognitive burden which, over the course of a reporting session, can become exhausting, even if it does not actually slow down my already sedate reporting pace. I am not convinced this comes down entirely to money, or to the vast experience international technology companies can bring to refining their products: many small software developers are able to produce simple, effective interfaces, even on small devices, as evidenced by what is available on App Stores.

It feels to me as though the development of the PACS systems I have experienced paid insufficient attention to the ergonomics and configurability of the interface, and to how a user will interact with the impressive underlying system architecture. Reporting environment and workflow preferences will vary from person to person, but the tools to facilitate this are not new. Even Windows Vista had a setup Wizard.

Perhaps I am being over-picky, spoiled by the quality of the ‘front end’ of modern mass-market software. After all a PACS workstation is orders of magnitude more complex than a word processor, right? There is bound to be a learning curve and maybe it’s not reasonable to expect to just pick it up? I think this is arguable when you compare PACS with complex software like Photoshop or FinalCut Pro. But when software is complex it’s all the more important to allow guided play with it to learn. And to play, the software needs either to be fun (maybe this is too much to ask for a Friday afternoon reporting session) or at least intuitive and in line with other software behaviours. Learning when frustrated is almost impossible. Software training only gets you so far. I find taught click sequences almost impossible to recall.

Does any of this matter? Does it matter that I find using PACS frustrating (and informal conversations suggest it’s not just me)? This comes down to where we started. If the software is invisible, I can give my entire attention to what I am trained to do. Starting an interventional case exasperated after battling to get the patient prepared and into the department means a mistake is more likely. So too wrestling with PACS to get it to do what I want surely increases reporting error and ultimately will affect morale, enthusiasm and throughput. 

Human factors are as important in software design as they are in the development of a safe cockpit for an Airbus. Ignore them and eventually a system will fail. I don’t need rounded corners on my buttons, but if this is a by-product of a relentless focus on the user experience, I’ll take them.

#IRad, Twitter and the dominant power of an image

Check out my TACE!

One of the great things about interventional radiology is the gratifying and instant feedback the imaging gives you. You can see where the blood vessel was leaking, your perfectly placed coil pack (or your embolic agent of choice) and the end result. If you’ve got some nice 3D reconstruction software, the pictures of endovascular aneurysm repair can be stunning. Identifying the feeding vessel to a liver cancer and injecting it with chemotherapy or percutaneously ablating a renal tumour can sometimes produce images that demonstrate such precision, it’s difficult to imagine why you’d treat in any other way. 

Armed with such beautiful images attesting to the technical successes of our interventions, it’s easy to see why sharing them, and tagging them with #IRad, #CLIfighters or #clotout (to name a few), is tempting. We want to show off the great stuff we do every day, and advertise and proselytise about interventional radiology. But sometimes posting cases makes me uneasy, not only because of the confidentiality issues involved, but because, just like Instagram, they represent a filtered and distorted version of reality.

Technical skills make up only a part of what it means to be a good interventional radiologist. Clinical assessment, communication with patients and colleagues, and the knowledge of when it’s better not to do something are also fundamental. I sometimes find trainees really struggle to engage with these skills, preferring the perhaps more tangible expertise of the angiography suite: doing stuff. A focus on imaging skews perception of what is important.

Then there’s a selection bias. It’s rare to see disaster posted (unless from that terrible institution: the ‘Outside Hospital’). But it’s also uncommon to see failure, error, a suboptimal result or simply the more mundane cases that make up much of our work. There is nothing wrong with celebrating our complex or impressive successes but they are not the only cases we can learn from and they don’t say anything about overall safety or effectiveness.

But finally for me there is a more fundamental issue. The emphasis on the technically impressive results in a subtle shift in focus from the patient to the operator. The patient is reduced to a cipher, an easel for the operator to display their artistry. Interventional radiology is not about the beauty or the brilliance of our imaging outcomes: it’s about people. And their outcomes are a lot more messy and unpredictable than the imaging. Some patients get better irrespective of what we do to them and some get worse despite our best efforts. Cancer progresses, angioplastied vessels go down, rebleeding happens and symptoms recur. It’s a Panglossian fallacy that just because something looks better, the patient gets better. Imaging can seem a convenient shortcut, but it is a surrogate of debatable quality. Cancer devascularisation is not the same as survival and patency is not the same as symptom resolution.

You might think this sounds rather pious and po-faced. Where is the harm (anonymity aside) in sharing our technical achievements when we understand the nature of the medium? But I think this underestimates the power of social media to distort our thinking. A focus on the visually alluring competes for our attention with more objective sources of information, especially when keeping up with the literature is hard, time consuming and sometimes (admit it!) tedious.

What’s the solution? Posting great cases can be inspiring and fun. But should we not share some failures too, or something non-technical, like a successful consultation, a difficult MDT discussion, a paper you’ve read, a decision you’ve struggled with?

If you post a case, give some clinical context and the outcome down the line. This is essential. Post a link to some high quality literature so the evidence for your intervention can speak for itself. Maybe summarise the literature in a few tweets (and here: a gratuitous plug for #SeminalPapersInIR) or in an infographic like these. That way we all gain by the sharing of more than a 280-character anecdote. The immediacy and seductive power of your image is linked with the evidence base essential to keep IR safe, patient centred and effective. Used like this, social media in IR is an exciting and powerful tool.

My TACE patient survived for 22 months from the time the image was taken. What will you remember most about her? Her angiogram or the Kaplan-Meier chart predicting her fate?

Cost-effectiveness, art and science in medicine

I’m consulting with a man in his mid 70s: a retired teacher in an inner city secondary school. He used to smoke, but quit several years ago. He’s otherwise pretty fit: he has never learned to drive so walks everywhere and uses public transport. He is married, and he and his wife are still independent. They have grown-up children who live a long way away. They go out to the theatre when they can and enjoy going to the local pub together and with friends. He’s got high blood pressure which is well controlled. He also has an abdominal aortic aneurysm (an AAA). It’s large. And he is worried about it. We talk about the options for repair. He is anxious about a major abdominal operation. He’s not dead set against it, but is concerned about the recovery and the impact it will have on his wife. On balance he is minded to have an endovascular repair, for which the AAA is anatomically suitable. We discuss the long term outcomes of open and endovascular repair. The need for secondary procedures and surveillance. He leaves the consultation undecided and plans to discuss it with his family.

I have spent the last 5 years on the NICE Guideline Development Committee, developing a guideline for the management of people with AAA. The results of several large trials comparing open and endovascular repair of AAA are consistent: while both endovascular repair and open repair are safe (meaning very few people die from them), endovascular repair is safer by a small margin and gets people out of hospital and back to normal substantially more quickly than open surgery. However, the longer term outcomes are not as good, and beyond about 7 years more people are dead after endovascular repair than after open repair. Because of this, endovascular repair is not cost effective, meaning the opportunity cost of providing it is too great and, at a population level, offering endovascular repair causes harm. Putting aside arguments about the contemporary relevance or methodological detail of the evidence, the only possible conclusion is that endovascular repair should not be undertaken if someone can have an open operation.
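To see why the health-economic conclusion is so stark, consider a deliberately simplified sketch (the figures and framing here are illustrative only; they are not the trial results or the NICE model estimates). Suppose that, over a lifetime horizon, endovascular repair costs more than open repair and, because of the poorer late outcomes, delivers no more (or slightly fewer) quality-adjusted life years:

\[
\Delta C = C_{\text{EVAR}} - C_{\text{open}} > 0
\qquad \text{and} \qquad
\Delta E = E_{\text{EVAR}} - E_{\text{open}} \le 0 .
\]

A strategy that costs more and delivers less is ‘dominated’: no willingness-to-pay threshold, however generous, can make it cost effective. Even if ΔE were marginally positive, the ratio ΔC/ΔE would still need to fall below the threshold before EVAR could be justified for people fit for open repair. That is the arithmetic behind ‘the only possible conclusion’.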

There are several problems with accepting this analysis. One is that it requires a belief that savings made in one place in the healthcare economy will be realised as a benefit elsewhere: the assumption of allocative efficiency that underpins the opportunity cost argument. This might be an intellectual leap too far for some in the NHS of 2020. But perhaps the biggest issue is that it is a cold, joyless analysis – a faceless functional accounting, the reduction of individual encounters to marks on a Kaplan-Meier chart. It is the science in medicine, but medicine is more than science. It’s an art. For me some of the joy of practising medicine is in this art: the ability to synthesise the evidence into a narrative a patient can engage with, helping them work out the best option. What do I do then, when having done this, my patient’s preferred option is not cost effective, when the art and the science collide? How do I decide between the interests of a real person sitting before me, vulnerable, perhaps anxious, who trusts me to make decisions with them, for them and in their best interests, and the interests of an unknown person (or group) with whom I have no relationship – the people who will theoretically benefit if I choose the science over the art? How do I decide between my patient and society?

Since the publication of the draft NICE guidance on AAA, which recommended against endovascular repair partly on the basis of cost effectiveness arguments, I have defended and explained the decisions the guideline committee made at conferences, in conversations and in print. I have asked conference panellists who were challenging the guideline whether they think cost effectiveness is an important consideration when deciding on a therapy, and have frequently received unconvincing responses. But in the back of my mind, I’m questioning: are they right? Are the unseen consequences of my clinical decisions (for unknown people) my responsibility, or are these too distant from me to consider?

Within the framework in which NICE asks its committees to make decisions, the answer is clear: they are not too distant, and must be taken into account. But while it may be the right thing to do, to follow the science and deny treatments that are not cost effective, it can feel wrong. It’s a depressing analysis to frame the future for individual people in terms of population outcomes. Individual characteristics, ambitions, concerns and expectations, love, beauty and hope are subordinated to the inevitable logic of the data. This feels like a betrayal of the doctor-patient relationship, which is ultimately a personal one: and guidelines are implemented at a personal level, patient by patient.

This is not a new dilemma, either in medicine or in economics in general. It is a version of ‘the tragedy of the commons’. To some extent patients, the public and physicians understand it, in the general acceptance of triage in emergency departments, or waiting lists for elective treatment. But the idea of not offering treatments at all, solely on the basis of a cost effectiveness calculation, seems to be too much for many people in their role as clinicians or patients, even if it makes sense in their role as taxpayers. In fact it is probably impossible (and certainly not desirable) for clinicians to make bedside judgements on the basis of cost effectiveness, not only because of how it makes them feel, but also because it would undermine the trust central to the relationship with their patient. Solutions to the dilemma attempt to constrain the options available to clinicians either by incentivising certain therapies, or by limiting choice to cost effective options only (by either not funding the alternatives or by creating guidelines). In order to allow these solutions, clinicians need to accept that some decisions are taken out of their hands. But in doing this, it can seem as if the art in their practice is reduced to a near irrelevance. Perhaps this, then, is one of the reasons for the criticism, even resentment, of some of NICE’s draft AAA guidance.

Where does this leave us? We are caught in the cleft stick of increasingly costly technological advances in healthcare, wanting to offer them to our patients where appropriate, but understanding that ultimately resources are finite. We will need to face this as a society and as individual clinicians sooner or later unless healthcare costs are to escalate uncontrollably. We surely need to understand that the technical advances that interest us may not be in the best interests of society at large, and accept this where it is the case. We need to allow trusted organisations to make these decisions for us, within an open and transparent process, and we need to ensure that the art in medicine (and our joy in practising it) is retained but refocussed: towards helping our patients navigate their therapy choices within the constraints imposed on us by those trusted organisations. NICE’s failure to stand firm on its principles in respect of this aspect of the AAA draft guidance sets an unfortunate precedent and makes these issues more difficult, not less.

My patient returns to clinic. He has spoken with his family and his wife. He wants an endovascular repair. Cognitive dissonance rages within me. I take a deep breath in, and begin….

COVID: time to debate our values

For my first post, I’m adding to the thousands of column inches devoted to the COVID19 pandemic. I’m not an epidemiologist, a virologist, a public health expert or even someone whose practice has been substantially affected by COVID19, other than being unable to treat the patients I would otherwise treat, so I don’t claim any particular expertise. But ever since the pandemic hit the UK in earnest, I have felt a nagging doubt that to find the answers to how to deal with it, we are looking in the wrong place.

Two recent publications offer very different visions of the way government and society should respond to the challenges of Coronavirus.

The Great Barrington Declaration emphasises the harms associated with lockdown, for physical and mental health, for jobs and for the economy. It argues for an opening up of society to allow people at low risk of harm to live normally while shielding the vulnerable. The declaration has attracted vehement criticism, not only for its “libertarian agenda” but also for its unreferenced assertions and (the critics say) lack of scientific validity. Is it possible to shield the vulnerable? Is herd immunity possible?

The John Snow Memorandum presents a more conventional and mainstream view. It summarises (and references) what we can be sure about. It argues against allowing an uncontrolled outbreak in those at low risk, emphasising the challenges of shielding (sometimes large) vulnerable groups and the likely human cost in lives lost to COVID. But critically, it too makes unreferenced assertions about, in particular, the socioeconomic effects of an uncontrolled outbreak.

How do we reconcile the differing visions offered in these two statements? The evidence about COVID19’s biology only seems to get us so far. We know how it’s spread, we know it is highly transmissible, we know it is an order of magnitude more lethal than flu for some identifiable groups, and we know some of the mitigations we need to put in place to reduce its transmission and therefore the number of lives (or years of life) lost. We count in minute detail the infections, admissions and deaths attributable to the virus and we can see the effect of some lockdown policies on these numbers. But there is much we don’t know, in particular about the effects of our response. What is the effect of lockdown on the health, wellbeing, education and socioeconomic status of the population of people who, critically, would otherwise be unaffected by the virus and who therefore only experience the harms of the remedy? These effects are more difficult to measure and are therefore relatively invisible. They may arise distant from current events and be subject to many confounders, though some assessments of the health-economic impact of both COVID and lockdown have been attempted. Critics of lockdown suggest that its cost is orders of magnitude more than current ‘willingness to pay’ thresholds and that it is therefore not justifiable on this basis. Others argue that more effective lockdowns mitigate the economic effects of the virus and (in effect) that things could be much worse. A meaningful reckoning is impossible in the short term and both sides of the argument are complicit in extrapolating beyond the evidence when suggesting that the balance of benefits and harms of differing strategies favours one argument or another.

Perhaps an examination of our values can help? Maybe our attitudes to the huge changes in society imposed by governments worldwide can be better explained by what we consider important, rather than by the imperfect and incomplete biological and economic science we have on COVID19. Here are a few questions:

  • Do we have a moral obligation to protect the vulnerable?
  • If we do, is this absolute, or are there circumstances where this might be negotiable? How long does this obligation last? Are there sacrifices that we are willing to make in the short term that become unbearable longer term?
  • Even if it were possible, would an accurate accounting of the benefits and harms of lockdown policies be sufficient for definitively choosing a (least worst) policy, or are there reasons to override such an analysis?
  • Should individual liberty and autonomy be subordinated to an externally identified collective good?
  • Should we allow story and narrative to play a role in determining policy?
  • Is it reasonable to ask front-line staff to implement, at scale, generic policy when faced with decisions about individual people? Who is responsible for the individual and personal moral hazard associated with a policy decision being implemented in practice? 

These are not new questions. The broad concepts of utilitarianism, libertarianism and egalitarianism have been argued over for centuries by moral philosophers. There will never be a right answer, but unless science comes to the rescue in the form of a vaccine, these questions will need exploring anew in the context of COVID, especially in the worst-case scenario of short-lasting immunity and the virus becoming endemic. Can we stay locked down forever, unable to socialise, meet friends, work or travel? Such a prospect seems intolerable in the long-term but this position reflects my personal values, rather than a dispassionate scientific analysis. Ultimately it is our value judgements about these moral questions, as much as the science, that should determine our response.

We should discuss these issues alongside the emerging scientific evidence, but this discourse seems surprisingly absent.

This is not an argument for procrastination to allow a few more centuries of academic moral philosophical debate about the choices on offer. Decisions need to be made now by us, or rather by our leaders. But if we understand the values on which they base their decisions, we will find those decisions easier to follow.