In this blog, I’ll argue that AI in healthcare has paradoxically been approached both too ambitiously and not ambitiously enough. This has prevented many easily achievable benefits from being realised, while also holding back the next generation of healthcare.


Part 1a – Healthcare AI: Grand Ambitions

Few fields are as aligned with technological development as medicine. It's fair to say that medicine as a practice has been transformed by technology and now relies on it completely across all its facets: drug development, medical diagnosis, augmentation with prosthetic limbs. It has also been the source of new technologies, such as MRI scanners, where doctors collaborating with scientists created previously unimaginable devices.

Medicine feels like it's supposed to be futuristic: science fiction bombards us with a gleaming white vision of technology-driven medicine where we will never need to feel the cold hands of a doctor on our abdomen, and probably even the dentists have laid down their drills¹. So it seems perfectly natural that mankind's latest and greatest technology, Artificial Intelligence, should be embedded in healthcare.

How hard can it be? Those of us who tried to interact with a GP service during lockdown could be forgiven for thinking the only tech needed to get most of the way there would be a recording of a busy phone line, alternated with one of a slightly frayed receptionist offering vague promises about appointments being available in a couple of months². So, across modern healthcare, surely there’s huge scope for AI to help? Many agree, and some of the world’s brightest minds, coupled with some of the world’s deepest pockets, have set about making this come true.

There have been successes. For example, medical imaging has been successfully assisted with ML techniques, medical record processing can be improved, and AI can even point the way to a new understanding of health: it can accurately predict whether a patient is going to die, though we do not know how. However, it has not been plain sailing. When asked to compete directly against humans in novel situations, AI has failed; during Covid, AI models did not help with diagnosis or analysis despite much investment, and the transformation of front-line medical care with AI has seen some serious setbacks.

¹ I suspect the reason that dentistry is never shown on science fiction programmes is that no one believes dentists will ever give up their drills.

² I’m teasing GPs in this blog a little, which I figured is safe as I’m unlikely to meet one in person.


Part 1b - Ambitions Thwarted

The specific problems the medical arena provides can be charted by investigating one of AI's greatest successes, and the source of much of our angst about its potential superiority: the arena of games. 

IBM’s Deep Blue beat the world’s best chess player, Garry Kasparov, in a single game in 1996 and then in a full match in 1997: the culmination of about 20 years of effort in chess AI development. IBM then developed its DeepQA architecture for natural language processing which, rebranded as Watson, was able to crush the best human champions at Jeopardy in 2011: an advance that was thought to be the one that could allow it to compete and win in human technical fields.
By 2012 IBM had targeted Watson, which was by then a combination of technologies they’d developed, at the healthcare industry, especially oncology.

Success looked inevitable: press releases were positive, reviews showing progress against human doctors were published, and Watson was able to consume in a day the volume of medical papers that would take a human doctor 38 years to read. I made a bet with a doctor friend that by 2020 the world’s best oncologist would be a machine.

I lost my bet, but not as comprehensively as IBM lost their big bet on healthcare. The initial pilot hospitals cancelled their trials, and Watson was shown to recommend unsafe cancer treatments. The program was essentially shuttered, with Watson pivoted to become the brand for IBM’s commercial analytics, its natural language processing reused as an intelligent assistant. Today, IBM’s share price is 22% lower than at the point of the Jeopardy triumph.

I've used IBM's Watson to illustrate the difficulties here, but I could have picked failures with virtual GP services, diagnostics, or others. I'm sure organisations like these will succeed in the long run, but we can explore why some of these failures were likely.
To understand something of the scale of the challenge we can look all the way back to where the field started with the cyberneticists of the 1940s.
One cyberneticist, W. Ross Ashby, conceived several laws, one being his "Law of Requisite Variety". This law should be better known, as it explains the root of all sorts of intractable problems in IT: why large public sector IT projects tend not to go well, why IT methodologies such as PRINCE2 mostly don't work, and why we should be very worried about our ability to control super-intelligent AI. The law states that "only variety can control variety". That is, if you are trying to control a system with another system, the control system must have at least as much complexity as the target system; otherwise, it won't be able to cope with all the target's outputs, and there will be escape.
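Ashby's law has a simple counting form that makes the point concrete. The sketch below is illustrative only (the function name and the numbers are invented): a controller with C possible responses can at best collapse D possible disturbances into ceil(D / C) distinct outcomes, so only when the controller's variety matches the world's can every disturbance be steered to the one desired outcome.

```python
import math

def min_outcome_variety(disturbance_states: int, control_states: int) -> int:
    """Ashby's law in its simplest counting form: a controller with
    control_states responses can at best collapse disturbance_states
    possible disturbances into ceil(D / C) distinct outcomes. Only when
    C >= D can every disturbance be mapped to the single desired outcome."""
    return math.ceil(disturbance_states / control_states)

# A board game: the disturbances are finite and enumerable, so a
# sufficiently complex controller can cover them all.
print(min_outcome_variety(disturbance_states=100, control_states=100))      # → 1

# Front-line medicine: vastly more situations than any fixed rule set
# has responses for, so some outcomes inevitably escape control.
print(min_outcome_variety(disturbance_states=1_000_000, control_states=100))  # → 10000
```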

In a game like chess, all the information needed to calculate the optimum outcome is included on the board: chess is hard, but the variety is not great. In the world of front-line doctoring, however, there is incredible variety, and you need incredible complexity to supply the right outputs. This presents an immense challenge for AI: real-world patients will present edge cases absent from the training material, and the AI would need to solve them effectively in one shot. We find they cannot, and escape is inevitable, such as the medical AI that agreed a patient should kill herself, or one that was solving problems but was maybe racist, or one that was definitely racist. Could a future medic's workday involve running the surgery, doing the admin, and checking if the AI assistant has had a racist incident?

There is another problem in adopting AI into medical settings that probably has a technical name, but I will term it the 'Bus stop granny carnage problem'. If someone crashes their car into a bus stop and kills three beloved grannies, it would be a big story on local news. If an autonomous car did the same, it would be a global news story, probably resulting in lawsuits and legislation. The point is that we're currently much more tolerant of human fallibility than of machine fallibility, and the bar for automated technology outcomes is therefore higher than it is for humans. This is somewhat rational: a single human can only do so much harm, but AI will scale, and so its mistakes would be replicated.

Ultimately, these barriers make it extremely challenging to introduce AI into front-line care as a replacement for humans. But as we'll see, that doesn't necessarily matter: it can still provide huge transformational benefits.


Part 2 – The Opportunity Missed

We have seen that grand ambitions for healthcare AI have not led to the revolutions we hoped for; I will argue they have also led to the greatest areas of opportunity being neglected. Health services are buckling under their own "Law of Requisite Variety" issue: the hugely complex range of treatments that modern healthcare provides relies on similarly complex administrative organisations covering logistics, procurement, finance, HR, IT and so on. Nearly half of NHS staff don't have medical qualifications, which sounds like a poor ratio; by comparison, the administration/manager-to-technical ratio in my organisation is 1:8. In American healthcare the ratio is worse still, and the trend is terrifying, as can be seen from the chart below.


US Healthcare: sure there's a wait for physicians right now, but when it comes to billing you'll be amazed by the slickness of our operations.

Not only are the numbers a lot higher for Administrators compared to Physicians, but their growth rate is far higher. As already mentioned, this is to be expected as medical service complexity increases.

From this realisation, however, it follows that using AI to replace (123,000 NHS) doctors is less beneficial than using it to replace (over 500,000 NHS) support staff. It is also vastly easier than the very high-stakes medical side, with the extreme barriers to replacement that I have already outlined.

Addressing this problem cannot come soon enough: the growth in healthcare complexity has led to healthcare cost increases becoming unsustainable in all advanced economies³. Options to address this should be prioritised now.

Research on intelligent automation indicates that a huge 40% of admin jobs in the NHS could be replaced by current AI automation technology. Work I have been involved with at the NHS, using only system replacement and process change, has shown similar levels of efficiency saving. Such improvements are achievable; my estimates here are large but realistic.

Automating processes using captured data to establish intelligent bots

Data from processes is captured in the store and interrogated to design automated processes. 

The automated processes both consume the data in the store (to refine process outputs) and add to it, allowing continuous process enhancement. Manual oversight, intervention and refinement of the automated processes is still possible, but a single human now covers many multiples of the volume handled under the manual processes.
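The loop described above can be sketched in a few lines. This is a minimal toy, with every class name, number and heuristic invented for illustration: process data accumulates in a store, an automated process (a "bot") is parameterised from that data, and the bot's own activity feeds back into the store so its parameters keep improving while a human only reviews the flagged cases.

```python
from statistics import mean

class ProcessStore:
    """Captures data from process runs (here, just processing times)."""
    def __init__(self):
        self.durations = []

    def record(self, duration: float):
        self.durations.append(duration)

class TriageBot:
    """Automated process: flags cases much slower than the running
    average for human review, using the store as its baseline."""
    def __init__(self, store: ProcessStore):
        self.store = store

    def handle(self, duration: float) -> bool:
        # Consume the store's data to set the current baseline...
        baseline = mean(self.store.durations) if self.store.durations else duration
        flagged = duration > 1.5 * baseline   # crude pinch-point heuristic
        # ...and add this run back to the store: continuous refinement.
        self.store.record(duration)
        return flagged

store = ProcessStore()
bot = TriageBot(store)
for d in [10, 11, 9, 30, 10]:
    print(bot.handle(d))   # only the 30-unit case is flagged: F F F T F
```

The design point is the feedback: the baseline is not hard-coded but derived from the same store the bot writes to, so the control parameters track the process as it changes.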

The use of AI to improve logistics in highly complex and fast-changing environments is actually one of the most developed areas of AI: DARPA managed to recoup the entire cost of its 30-year AI programme in a few months with its hugely successful DART project, which showed how AI could be embedded in wider processes to release order-of-magnitude improvements.

Another reason to automate in this way is that it can add resilience to a stressed system:

  • Identify pinch points, allowing humans to intervene
  • Compare the performance of similar units and identify issues/opportunities

Work by the NHS’s BSA group has used statistical techniques to identify savings in the NHS, primarily through anti-fraud measures. This has realised over £1bn for the NHS: a staggering achievement, tempered somewhat by the fact that there must be a commensurately staggering level of NHS fraud occurring (estimated at £1.29bn/yr). Automating processes in this way allows extremely tight tracking and trending to be incorporated, so the sort of anti-fraud techniques used by the BSA can be built directly into the processes themselves rather than applied retrospectively. Again, the use of machine learning to identify and reduce fraud is already well established.
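To give a flavour of the kind of tracking-and-trending check that could run inside an automated process (the data and threshold below are entirely invented, and real anti-fraud work uses far richer models), here is a simple peer-group outlier test: flag any unit whose claim volume sits several standard deviations from the norm for comparable units.

```python
from statistics import mean, stdev

def flag_outliers(claims: dict[str, int], z_threshold: float = 3.0) -> list[str]:
    """Return units whose claim volume is more than z_threshold
    standard deviations from the peer-group mean."""
    values = list(claims.values())
    mu, sigma = mean(values), stdev(values)
    return [unit for unit, volume in claims.items()
            if sigma > 0 and abs(volume - mu) / sigma > z_threshold]

# Invented monthly claim counts for five comparable practices.
claims = {"practice_a": 410, "practice_b": 395, "practice_c": 402,
          "practice_d": 398, "practice_e": 2100}

print(flag_outliers(claims, z_threshold=1.5))   # → ['practice_e']
```

Running in-process rather than retrospectively, a check like this would surface the anomaly at the point the claims are submitted, not in a later audit.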

Individual hospitals often operate with a high degree of autonomy, even when part of a larger healthcare group (such as an NHS Trust in the UK). This can lead to the faster development of best practices and improved responsiveness to local needs. However, it can also mean that it's difficult for those involved to know if they are really operating at a good level. At its worst, you can get scandals like Shrewsbury NHS Trust, where poor performance went on for years with tragic consequences. By incorporating automated processes, the comparison of units can also be automated, making it much easier to highlight performance outliers and either intervene or learn from best practices and apply them to other areas. The use of AI should also make these comparisons easier to action: there is evidence that, in private, doctors are very accurate in assessing one another's performance, but it is challenging to convert such information into improvement plans.

There is evidence that healthcare systems with better logistics have better clinical outcomes, and that those systems are also better able to absorb shocks, such as maintaining normal services during the Covid pandemic. We should therefore expect clinical outcomes to improve with automated logistical support too.

I think it will also be possible to improve the overall experience for patients and doctors with this sort of approach. This may induce groans from readers who've experienced many ‘improvements’ to customer service and could see this as an attempt to cook up something even more annoying than an offshore call centre. But it does not have to be like that. With this sort of approach, the cost of keeping people informed drops to nearly zero. For the first time, missed appointments, rescheduling and keeping everyone informed about what's happening cease to be issues.

The correct areas to prioritise for healthcare AI (blue)

To summarise, pointing intelligent automation at the management and scheduling aspects of healthcare is a much easier prospect than the medical arena, but could have a far larger overall benefit, offering one of the few options to get costs back to sustainable levels. My company, Qubix, is putting its money where my mouth is here, investing to build the means to achieve this transformation. If you want to collaborate, please get in touch.

³ Unsustainable means increasing faster than GDP.

⁴ I elaborate on how improvements from automated comparisons can be gained in the next blog section.
