Can AI Explain Medical Mysteries?


By Tom Linder
Photo: Elisabeth Hildt, professor of philosophy and director of the Center for the Study of Ethics in the Professions

Whether it's on television via shows such as Grey's Anatomy and House or in the real world, medical mysteries have long piqued our interest as a society. Imagine the plot twist if the titular character in House, stumped in his pursuit of a miracle diagnosis, simply turned to an artificial intelligence tool that relays the correct answer.

That scenario is closer to reality than you may think: AI tools are already showing up in medical contexts. For Elisabeth Hildt, professor of philosophy and director of the Center for the Study of Ethics in the Professions at Illinois Tech, the concept of explainability becomes particularly important when it comes to figuring out exactly what role these AI tools should have.

What exactly is explainability?

"Explainability is where the tool provides some sort of explanation about how it came up with its output," explains Hildt. "This is particularly relevant in medicine when it's about medical decisions, when there is a medical doctor getting support from an AI tool. Often, these are called clinical decision support systems (CDSS) that are designed to support medical doctors and clinicians in their decision-making."

In a paper published in the journal Bioengineering, Hildt explores the concept of explainability in medical AI applications, focusing on CDSS.

She presents four cases of AI-based CDSS, each with a different type of explanation. One lacked explainability altogether, a second offered only post hoc (after-the-fact) explanations, another was a hybrid model that provided medical knowledge-based explanations, and the fourth was a causal model whose explanations bore on complex moral decision-making.
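To make the "post hoc" idea concrete, here is a minimal sketch, not drawn from Hildt's paper or any real CDSS, of how a tool might attach an after-the-fact explanation to a black-box prediction. The classifier, the synthetic data, and feature names such as glucose and blood_pressure are illustrative assumptions; the explanation method shown is scikit-learn's permutation importance.

```python
# Minimal sketch: a post-hoc explanation for a black-box clinical classifier.
# Feature names and data are synthetic, illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["age", "blood_pressure", "glucose", "bmi"]  # hypothetical inputs
X = rng.normal(size=(200, len(feature_names)))
# Synthetic labels loosely driven by two of the features.
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# The fitted model stands in for the "black box" whose internals are opaque.
model = RandomForestClassifier(random_state=0).fit(X, y)

# Post-hoc explanation: measure how much shuffling each input degrades accuracy.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```

A clinician could read such a ranking after the prediction is made, even though the model itself remains opaque; that is the spirit of the post hoc category, as opposed to models whose reasoning is interpretable by design.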

Hildt's study revealed that the role of explainability in CDSS varies considerably. One of the biggest issues, particularly in black-box CDSS that provide no explanation, is trust.

If the AI's reasoning is as secretive and guarded as a TV doctor's thought process, patients will struggle to trust it.

"It's difficult for clinicians and medical doctors to make autonomous decisions, and to take responsibility and be held accountable for their medical decision-making, if it's based on a black-box tool," says Hildt, raising questions about legal liability.

While black-box AI tools such as neural networks, whose inner workings are not intelligible to human users, may deliver more accurate results, they are also the least likely to be trusted by medical professionals and patients, precisely because users cannot understand how the tool came up with its output.

"That's a dilemma," says Hildt. "And there are people who say, 'Maybe we don't need explainability; maybe a high level of accuracy is enough.'"

She continues, "I could imagine this is true in contexts that are low-risk, that are standard applications, or in cases where people have been familiar with the tool for a long time and know that it's accurate. … I think when tools are introduced in medicine and the clinical validity is not proven, people don't know whether they can trust them. Then, it's good to have some sort of explanation."

Hildt points out that there's still a lot of research to be done before the field can trust CDSS moving forward.

"From an ethics perspective, I think what's really needed is empirical data," says Hildt, raising concerns about privacy, trust, data protection, and informed consent, among other ethical questions. "What's really needed are interview studies with medical doctors and with patients about their perspectives on AI-based tools and the role of explainability. What type of explanation is needed? What are the benefits? What are your concerns? How does it influence doctor-patient relationships? How does this change the way doctors and patients can make decisions about treatments?"

While these answers aren't immediately apparent, Hildt is confident that the questions can lead researchers in the right direction. If explainability can help improve the quality of health care, support medical decision-making, increase trust, and facilitate doctor-patient communication, it's likely here to stay.