Senior physician and artificial intelligence expert – through this unique combination, he moves in two separate worlds that have recently caught each other’s attention. What does Markus Lingman have to say about the use of AI tools in healthcare?
When Markus Lingman talks about AI, he makes it sound simple, or at least obvious. When asked what exactly AI is, he replies: multidimensional non-linear mathematical models. No more, no less. We agree that the concept of AI is complicated and that it is probably best not to delve into definitions and technical specifications. A lot of things called AI today are not AI, but as Markus says:
“We have to call it something, to be able to talk about the subject at all.”
Combining technology and medicine
Markus has become an advocate, or rather a translator, of AI in healthcare. With his unique combination of expertise in both computer science and medicine, he builds bridges between the fields in a way that few people can. The path to the award “AI Swede of the Year”, which Markus received last year, started with a master’s degree in engineering and continued with specialist training in cardiac care and research at the Sahlgrenska Academy. Today he works as a strategist in the Swedish region of Halland and is deeply involved in Leap for Life, an initiative for information-driven care.
Explainable AI is the next step
Markus works for increased understanding and application of AI in healthcare, and one of the things he is most interested in right now is “explainable AI” – the next generation of AI, where the system can explain to a person how it arrived at its answer. It is a big step from the “black box” we are used to, where even the developers cannot say why a model reached a specific decision.
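A minimal sketch of what “explainable” means in practice: a linear model is inherently explainable, because each input’s contribution to the prediction can be read off directly. All feature names, weights, and values below are invented for illustration; they are not from any real clinical model.

```python
# Illustrative only: a linear risk score whose output can be decomposed
# into one contribution per feature, i.e. the model can "show its work".
# All names, weights, and patient values are hypothetical.

def explain_prediction(weights, bias, patient):
    """Return the score and a per-feature breakdown of how it was reached."""
    contributions = {name: weights[name] * value
                     for name, value in patient.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical model: weights would be learned elsewhere, hard-coded here.
weights = {"age": 0.03, "systolic_bp": 0.01, "prior_admissions": 0.5}
bias = -2.0
patient = {"age": 70, "systolic_bp": 140, "prior_admissions": 2}

score, why = explain_prediction(weights, bias, patient)
# Print the explanation, largest contribution first.
for name, contribution in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {contribution:+.2f}")
print(f"total score: {score:+.2f}")
```

Deep neural networks do not decompose this cleanly, which is exactly why explainable-AI research exists: techniques such as additive feature attributions try to produce a breakdown like the one above for black-box models.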
Building trust in technology
It has become clear that AI-based tools are not evaluated like other medical devices: buyers demand to know how they work. But Markus does not believe the way forward for AI in healthcare is for every healthcare professional to understand exactly how it works (which would in any case be impossible), but rather to build trust around the technology. Which is precisely the purpose of explainable AI.
“Transparency around the technology is needed, but for human reasons even more than regulatory ones. The people who will use the technology must trust the tools enough to use them,” Markus says, and continues:
“In healthcare, we already do a lot of things today without understanding how they work. We know how to use an MRI scanner and what results it gives, but few know exactly how it works. And when we approve medicines, we do not require the ability to describe their function at the molecular level.”
How do you build trust in AI in healthcare today?
“Being able to explain on an overall level how the models work helps, as does understanding that healthcare is full of humanists and that doctors are personally responsible for the recommendations they make. But also that there are control bodies that vouch for the quality.”
To understand AI in healthcare, you need to understand healthcare and where it comes from, says Markus. There are many truths in medical research, healthcare organisations, and the medical profession that completely or partially clash with data science. Most obvious is the human perspective, but the schools also differ in their view of research and statistics. Where medical research is hypothesis-driven, data science starts from annotated data.
“Healthcare comes from classical statistics which, mathematically speaking, is a simplification of reality. AI, on the other hand, can consider more aspects, but at the cost of less comprehensibility,” Markus says.
1072 variables are closer to reality
At the Hospital of Halland, Markus and his team built a model of the risk of readmission for patients who had been treated there. They fed in essentially their entire database of tens of thousands of variables and let the model boil the number down to an optimum. They landed on 1072 variables per patient that each contributed to the model’s performance in some way.
“The important message here, I think, is that this is what reality looks like. Because if you only add 12 variables that you think matter, you do it so it can make sense – not because it’s similar to reality.”
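The idea of letting the data, rather than a hand-picked shortlist, decide which variables stay can be sketched with a toy filter: keep only the variables whose correlation with the outcome clears a threshold. This is a deliberately crude stand-in for the far more careful selection a real clinical pipeline would use, and every variable name and value here is synthetic.

```python
import random

# Toy sketch of "boiling down" a wide variable set: keep variables whose
# absolute correlation with the outcome clears a threshold. Real pipelines
# use far more rigorous methods; this only illustrates data-driven selection.

def correlation(xs, ys):
    """Pearson correlation between two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_variables(data, outcome, threshold=0.2):
    """data: {variable_name: list of values}; outcome: list of labels."""
    return [name for name, values in data.items()
            if abs(correlation(values, outcome)) >= threshold]

# Synthetic example: one informative variable hidden among pure noise.
random.seed(0)
outcome = [random.choice([0, 1]) for _ in range(200)]
data = {"informative": [o + random.gauss(0, 0.3) for o in outcome]}
for i in range(20):
    data[f"noise_{i}"] = [random.gauss(0, 1) for _ in outcome]

print(select_variables(data, outcome))
```

The point of the passage survives even in this toy: the set of variables that actually carry signal is discovered from the data, and its size is whatever it turns out to be, not a round number someone chose in advance.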
While exploratory analysis alone does not pass muster in medical research, AI models are often built precisely by letting the model itself find the associations. The goal is to understand a more complex reality, and this is where Markus believes artificial intelligence can fundamentally change healthcare and, in the longer term, help us achieve something really valuable: precision healthcare.
The ability to manage the unmanageable
Artificial intelligence lets us take into account what has previously been impossible to consider at scale. The technology can handle huge amounts of information and probabilities and see patterns – for each individual.
“AI will play an important role in calculating risks and probabilities for you as an individual, not for you as part of a group. In the past, we have reasoned that if you have heart failure, you should have the heart failure medication that we give to everyone who has heart failure, because we have seen in studies that that group does well. Now we are moving towards: you have heart failure for exactly this reason, and together with your other conditions, it is precisely this treatment you should have to get the best chance of feeling better or living longer.”
The hope is that the machines will help us achieve more human care. A hundred years ago, the doctor knew all his patients. Then we moved into “group healthcare”, based on randomised controlled trials. Very useful and successful. But now there are far more of us who need care. Perhaps, with the help of AI, we still have the chance to go back to more personalised healthcare.
Is human better than high-performance?
The question is how good AI-based tools must be before healthcare dares to harness their full power. Pending regulation, it is doubtful whether fully automated medical decisions are even legal today, and tools based on AI models are therefore used only as decision support for healthcare professionals.
“There are lots of studies comparing an AI model with the precision of a clinician, showing that the model is about as good or a little better. Is that enough?”
Markus lets the question hang in the air.
Sometimes human knowledge needs to be added to AI models to compensate for certain weaknesses, such as handling new conditions. These are called hybrid models, and the addition can actually make the model’s performance worse. Sometimes the benefit of involving humans is small and the loss of performance is large. That opens up an interesting discussion about what matters most: being human or being high-performing?
“We may find ourselves in a situation where it is illegal to use automated decisions, but unethical not to do so,” says Markus.
Guidelines today, markings in the future?
The societal discussion on AI in healthcare continues, and the EU’s proposal for a regulation on harmonised rules for artificial intelligence (1) is out for consultation. In the meantime, the WHO has published its Ethics and Governance of Artificial Intelligence for Health (2), which can guide buyers of medtech towards the most patient-safe decisions possible.
One final question, Markus: how do you think healthcare will relate to AI-based technology in the future?
“In 10 years, I don’t think you’ll think about whether there are AI models behind the decision support at all. There will probably be a quality stamp or some sort of marking system that tells you how good the decision support is, and then you rely on that, as with any other tool in healthcare.”