Technology giants like IBM, General Electric and Google have been eager to capitalize on AI advancements that could improve medical care. But for AI to make its mark, it will have to overcome a fundamental barrier that has dogged healthcare for decades: limited access to patient data.
Access to data is just one piece of the puzzle. Machine learning tools also need to be fed data that differentiates right answers from wrong answers. For particularly complex conditions, that kind of easily digestible data might not exist.
"In a specialized domain in medicine, you might need experts trained for decades to properly label the information you feed to the computer," Thomas Fuchs, a computational pathologist at Memorial Sloan-Kettering, told MIT Technology Review.
Researchers have raised this issue before, with some arguing the next generation of machine learning software needs to capture a “richer clinical picture” by tapping into physician-generated crowdsourced data. Data scientists have also traced AI’s practical shortcomings to an inability to access robust data, like social determinants of health, from EHRs.
Jon White, M.D., deputy national coordinator at the Office of the National Coordinator for Health IT, tweeted from an event hosted by SMART Health IT, pointing to the potential complications of feeding bad data into machine learning models and hinting that improving datasets may be a role the government can take on.
"Or, maybe we need to reimagine or reconfigure our datasets in health care. #AllTheWorldsAPixel" — Jon White (@pjonwhite), June 26, 2017
As Manish Kohli, a healthcare informatics expert with the Cleveland Clinic, told MIT Technology Review, "healthcare has been an embarrassingly late adopter of technology." That means companies like IBM — which is staking its reputation on Watson — GE and Google are partnering with healthcare systems to gain access to patient data and begin sifting through it.