The BIG Debate: AI in healthcare: Utopia or Dystopia?

This was the core question that a panel of experts wrestled with as part of the closing session of day one, chaired by EHD programme director Sunand Prasad. Providing a provocative opening case that AI in healthcare would lead to a more oppressive and less equitable future, Dr Paul Barach (whose affiliations span Thomas Jefferson University, the University of Birmingham School of Medicine, and Imperial College School of Medicine) delved into the world of TV and Hollywood to highlight his concerns.

Dr Barach first shared a 1964 episode of The Twilight Zone, ‘From Agnes – With Love’. In it, a computer becomes so fascinated with its programmer that it sabotages the relationship between the programmer and his wife. Unable to get home owing to the computer’s mischievous interventions, he eventually gives up and remains with the machine.

Produced a few years later, the classic film 2001: A Space Odyssey reflects human fears that AI will take matters into its own hands if it suspects humans may disconnect it or try to switch it off. And right up to the present day, movies such as M3GAN present a reality where technology has no difficulty whatsoever with the concept of harming humans.

Dr Barach also pointed to Isaac Asimov’s celebrated book I, Robot, published in 1950, which sets out the Three Laws of Robotics, also considered to be the rules of AI: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

However, as Dr Barach observed, if you reorder the laws, you get very different consequences – from a “balanced” or “frustrated” world to the extremes of a “terrifying standoff” or a “killbot hellscape”. AI development in healthcare raises all sorts of ethical issues, too, and he emphasised the role of the 2018 Seoul Declaration. According to Dr Barach, this was “a manifesto for ethical medical technology, calling out the ethics of AI and machine learning in surgery, and signed by six of the major surgical societies around the world”.

Presenting the case for AI as a positive driver of change, Dr Charlotte Refsum, director – health policy at the Tony Blair Institute, argued that the rules of the game have changed.

“I think we have to recognise in the new AI era that our health services will become more fragmented, not less,” she said. “That’s because things like ChatGPT-4 will allow medical advice to be more widely and cheaply available, and people will be seeking advice outside of the traditional places where they get healthcare. More people will consult online and manage their long-term conditions through apps.”

Another benefit of AI is its ability to predict and understand risk from personal health data. “If you can predict, you can prevent,” she noted. Another aspect is agency. “I think that citizens will have much more agency about how they manage their own health, and we need to give them the tools to turn that into empowerment, so that they can assess the advice they get freely online or through an app and start to differentiate what is good and bad advice.”

Risk exposure is a further challenge. Said Dr Refsum: “Almost from the moment you’re born, you can have a pretty good guess about what you will die from and when. And that makes you almost uninsurable in a competitive insurance market, so now, more than ever, we need some kind of nationalised health insurance model.”

With all these issues, she added, governments around the world are thinking about how to support their health systems “to stimulate the development of AI, to regulate it, and then to assimilate it in a healthcare system that looks very different to the one we have now”.

Institutions and infrastructures

Indy Johar, co-founder of Dark Matter Labs, which works to create institutions, instruments and infrastructures for a more equitable, caring and sustainable future, sees the problem of AI and data partly as one of weak or ineffective institutions. “Often, we confuse the capital structures and legal structures that build those technologies with the technologies themselves,” he said. “One of the challenges is we’ve got 19th-century institutions building technologies in the 21st century.”

He added: “I think we’re already beyond the limits of human analogue organisations. You could argue this is resulting in systemic failure of actual provision of health.”

He suggested that machine learning technologies will give rise to learning-based organising, and that organising systemically for learning, as a means of optimising, is the only way health systems can operate given their level of complexity. “The idea that we can do this without machine assistance is, frankly, not realistic,” he asserted. However, precision and accuracy are required in how we invest in these new technologies, and in how power, authority, control and biases are constructed. Currently, he warned, that debate is not happening.

The final expert on the panel, Dr Nirit Pilosof, head of research in innovation and transformation at Sheba Medical Center, and a faculty member at Tel Aviv University, backed Dr Refsum’s stance, agreeing that AI could be “a paradigm shift in our conception of healthcare services”.

But she warned that the technology companies should not be allowed to drive the agenda and design the systems, calling on the healthcare design community to come to the table and lead. “We must be proactive in leading the change, the transformation, and it really requires a lot of design skills, because the systems that are dependent on AI technology need to be designed, just like we’re designing any service or building or application,” she said.

“Should we be designing more hospitals?” she asked. “Should we be discussing how many patient beds we need when we know that AI technologies and remote care can shift care outside of hospitals into the home environment and the community? If we can prevent and predict, then maybe the whole concept of how we care for patients should be redesigned.”

Where does regulation sit?

Asked about the role of regulation as a means of controlling AI, Dr Refsum said: “Regulation in healthcare has always been about safety and effectiveness. Now we have to consider bias, hallucinations, privacy, cyber security, and all of these things, so you’re regulating in silos a lot of the time. In the ‘regulate’ part, it’s trying to chart a pragmatic course through all these types of regulation. Some of them play off against each other or there is a trade-off.” 

Johar observed that regulations come in several forms and layers. “If you use the analogy of curricula in schools, which [help regulate] the learning frameworks, what is the equivalent of creating curricula for machine learning capabilities for healthcare? What’s the input data?

“Most of the data on which machine learning algorithms are trained are 97 per cent male, white, and actually very narrow. So, there’s a real systemic problem, you might say, in nurturing the database. You’ve got to have a responsibility to reflect the nature of the patients you’re seeing.”

He added: “Also, let’s recognise our own failings. Most machine learning aggregative information is actually a failure of human input and human data input. So, they’re reflecting back to us our own weaknesses.”

Dr Barach considered the issue from the perspective of creating learning systems. “Learning systems don’t regulate,” he remarked. “Learning systems learn but they only work under certain conditions of truthfulness, transparency and trust. These aren’t abstract nouns but things that are measurable. These are things you can destroy, these are things you can entrust, so we know how to create better frameworks. Learning systems engender more learning and more trust. But we’re not in a greenfield site.”

Dr Barach suggested that the problem lies less with AI and more with how new technologies are introduced by management without appreciating the barriers, resistance, lack of trust, and the extra workload burdens placed on staff, leading to emotional strain, burnout and mental health problems.

“We need to think about how we design AI into healthcare so that it doesn’t cause those pushbacks,” he said.

Dr Pilosof saw the problem as one of failing to consider the introduction of AI as a way of transforming the entire model of healthcare delivery. Instead, she lamented, we’re just “digitalising the old systems, turning paperwork into an app, or a physical model into a digital one”.

She continued: “When I think of AI, I like to think of it less as artificial intelligence and more as augmented intelligence. So, it will not replace the people, but it will be helpful. And we don’t have any choice. Ageing populations, chronic diseases, a lack of manpower – we need to make a change.”