Researchers at the Department of Defense are undertaking a major effort to better explain the decision-making processes of artificial intelligence, work that could have significant implications for healthcare as the industry explores machine learning as a diagnostic tool.
AI’s biggest advantage—the ability to mimic human learning through neural networks that can process vast quantities of information—is also its biggest flaw. As those networks become more advanced, the technology’s decision-making process becomes harder to understand, creating a black box of deep learning.
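To make the trade-off concrete, here is a minimal sketch, not drawn from the article, that contrasts a small interpretable model with a neural network whose learned parameters resist direct inspection. The dataset, model sizes, and use of scikit-learn are illustrative assumptions, not anything the DARPA program or the researchers quoted here have specified.

```python
# Illustrative sketch (assumptions: scikit-learn, a toy diagnostic dataset,
# arbitrary model sizes) contrasting a readable model with an opaque one.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
feature_names = list(load_breast_cancer().feature_names)

# A shallow decision tree produces explicit if/then rules a clinician can read.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# A multilayer network may classify as well or better, but its "reasoning"
# is distributed across thousands of numeric weights with no direct
# human-readable form -- the black box the researchers are trying to open.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X, y)
print("weights in the first layer alone:", net.coefs_[0].size)
```

The point of the sketch is only that the tree's decision path can be printed and audited line by line, while the network's behavior has to be probed indirectly, which is the gap explainable-AI techniques aim to close.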
RELATED: Using AI for diagnosis raises tricky questions about errors
That dynamic raises numerous questions in the medical field, where technology companies like IBM and hospital systems are exploring how AI can improve diagnosis while grappling with questions of trust and transparency.
A group of 100 researchers coordinated through the Defense Advanced Research Projects Agency is trying to peek inside that black box as a way to solidify trust in AI systems, according to The Wall Street Journal. The 3-year, $75 million effort aims to produce new transparent AI techniques and interfaces available for commercial use.
RELATED: Plenty of buzz for AI in healthcare, but are any systems actually using it?
Although the research group’s focus is not limited to healthcare, its findings could shape the way the technology is used in hospitals and medical clinics moving forward. In an editorial published in the Journal of the American Medical Association this week, Italian researchers outlined some of the unintended consequences of AI in healthcare, including concerns that the black box of advanced neural networks prevents physicians from understanding a computer-generated diagnosis or recommendation. Stanford University researchers have also called on the healthcare industry to temper expectations of AI with a more realistic discussion about its impact.
According to David Gunning, a DOD program manager overseeing the research effort, unexplainable AI systems will ultimately be cast aside in many industries.
“What I think could really happen is they just don’t get implemented, or people won’t trust them enough to use them,” he told the WSJ.