After IBM lobbied intensely for AI deregulation in the 21st Century Cures Act, the FDA will determine Watson's fate

For several years, IBM has been stepping up lobbying efforts to ensure that its artificial intelligence software is protected from federal regulation, and it’s continuing those efforts as the Food and Drug Administration prepares to issue important guidance on clinical decision support (CDS) software next year.

In the buildup to the passage of the 21st Century Cures Act, the company pressured lawmakers to ensure that Watson would not be regulated as a medical device by the FDA, according to Stat. The company spent $26.4 million on lobbying between 2013 and June 2017, although IBM says only a fraction of that total went toward health software regulation.

The law ultimately carved out some exemptions for medical technology, but it didn't offer the blanket immunity IBM had hoped for. Software that analyzes medical information and provides clinicians with recommendations about diagnosis or treatment is exempt under the law, but only if the healthcare professional can independently review the basis for those recommendations.

RELATED: Increasingly powerful AI systems are accompanied by an 'unanswerable' question

AI experts have pointed out that the algorithms that make AI software so powerful often operate as a "black box," making it difficult to understand how the machine arrives at its recommendations.

In its Digital Health Innovation Action Plan, the FDA said it plans to issue guidance on CDS software in the first quarter of 2018. Stat reports that eight IBM employees are registered to lobby on the issue of CDS regulation, in part through a new group of lawmakers known as the AI Caucus.

RELATED: The FDA is finalizing a new regulatory plan for digital health and the industry is thrilled

CDS proponents have pushed for self-regulation. In September, the Clinical Decision Support Coalition released final guidance for CDS software that calls on developers to ensure their products are transparent and provide sufficient time for physicians to reflect on the recommendations. But some physicians are wary of the impact unregulated AI software could have on patient care.

“Until we understand the basis for the AI and how the algorithms work, there really needs to be that third-party check and transparency,” Reshma Ramachandran, M.D., co-chair of the FDA Task Force at the National Physicians Alliance, told Stat. “We want to make sure there’s no harm done at the end of the road.”