Using AI for diagnosis raises tricky questions about errors

[Image: IBM Watson] Would IBM take the fall for a diagnostic error by Watson?

Artificial intelligence is changing the way physicians treat patients by using massive amounts of data to make faster and more accurate medical diagnoses.

But what happens when the machine is wrong?

That inevitable scenario raises a host of thorny questions, according to an article in Quartz that looks at who might take the blame for a computer’s mistake.

RELATED: Increasingly powerful AI systems are accompanied by an 'unanswerable' question

AI’s foray into the medical field has already raised questions about how it’s used, but it could also complicate the aftermath of a medical error, which some researchers rank as the third-leading cause of death in the U.S. Increasingly, hospitals are adopting a more open and honest approach to medical errors, allowing clinicians to own up to a mistake and discuss what happened with the patient, an approach that has resulted in fewer malpractice lawsuits.

But hospitals are also investing heavily in AI for the very purpose of improving care. Last week, Partners HealthCare announced a 10-year deal with GE Healthcare to develop and commercialize AI platforms specifically for the healthcare industry.  

The source of a medical error can be notoriously difficult to pin down, although physicians usually bear the brunt of the legal ramifications. Throwing AI into the mix raises questions about who is responsible for the machine’s diagnosis and whether blame would fall on the AI system itself, its designers or the company that owns it.

RELATED: New clinical decision support software guidelines highlight keys to self-regulation

Further complicating matters, as AI becomes more powerful, its decision-making process becomes harder to understand. New guidelines aim to address the use of clinical decision support software, including the principle that the clinician remains the ultimate decision-maker, but future AI integration may still have to account for the legal ramifications of getting a diagnosis wrong.