More clinical evidence needed to accelerate adoption of AI-enabled decision support: report

Artificial intelligence-enabled clinical decision support (CDS) has the potential to equip clinicians with the actionable information they need to enhance overall health and improve outcomes. However, regulatory issues, gaps in product labeling and patient privacy concerns need to be addressed before AI can be safely and widely adopted.

In a recent report (PDF), a working group at the Duke-Margolis Center for Health Policy examined the potential benefits and challenges of incorporating AI into CDS software, particularly software that supports improved clinical diagnosis, as well as barriers that may be preventing its development and adoption.

Improved CDS could be useful in reducing diagnostic errors, the Duke-Margolis team noted. Diagnostic errors account for almost 60% of all medical errors and an estimated 40,000 to 80,000 deaths each year, according to the National Academies of Sciences, Engineering, and Medicine, which also estimates that “nearly every American will experience a diagnostic error in their lifetime, sometimes with devastating consequences.”

AI-enabled diagnostic support software—a subset of CDS software—has the potential to augment clinicians’ intelligence, support their decision-making processes, help them arrive at the correct diagnosis faster, reduce unnecessary testing and treatments otherwise resulting from misdiagnosis, and reduce pain and suffering by starting treatments earlier, the working group wrote.

RELATED: FDA launches new tool aimed at safe deployment of AI in healthcare

Several key issues that stakeholders will need to address are delaying innovation and adoption of AI-enabled diagnostic support software, the researchers wrote, including the need to demonstrate the value of the software to provider systems. Developers will need to show clinical and economic evidence using data from a population representative of the health system, according to the working group.

“This evidence will include the effect of the software on patient outcomes, care quality, total costs of care, and workflow; the usability of the software and its effectiveness at delivering the right information in a way that clinicians find useful and trustworthy; and the potential for reimbursement for use of these products by payers,” the researchers wrote.

Clinicians also need to be able to assess the patient risk these products pose, and developers’ ability to explain how the software works and how the algorithms have been trained will significantly affect how regulators and clinicians view that risk. Product labeling may need to be reconsidered, and the risks and benefits of continuously learning versus locked models must be discussed, the working group noted.

RELATED: AI beats humans at identifying cervical cancer in NIH study

Overly technical descriptions of the software will likely be of little to no value to clinicians using it.

“Just as clinicians know that a particular drug does not always work, they will need to understand that AI-enabled software will not be perfect, or equally dependable in every instance. Clinical utility will only be realized if users are able to understand, trust, and manage AI technologies,” the researchers wrote.

From a regulatory perspective, the U.S. Food and Drug Administration (FDA) needs to provide more clarity on whether it will examine a product's "explainability" or "interpretability" when evaluating products in the AI-enabled Software as a Medical Device category, both in the current regulatory environment and in the precertification pathway, according to the researchers.

Earlier this month, the FDA provided its first guidance aimed at safe deployment of AI in healthcare, releasing version 1.0 of the working model for its software precertification pilot on Jan. 8 as an initial tool for testing the program. The program aims to provide companies with an optional "Excellence Appraisal" that functions as a "pre-check" approval, which would support a company's ability to gain approval for new applications in the future.

The pilot will start testing the safety of AI applications during the first half of the year, and the FDA is collecting public comments on the plan through March 8.

Once a product has entered the market, the FDA has historically sought to address device-specific safety issues through labeling requirements. AI-enabled software, particularly software that uses continuously learning algorithms, will challenge this existing regulatory paradigm, raising the question of whether the current approach to labeling is the right one for AI-enabled software, the working group wrote. “The relative newness of AI-enabled [Software as a Medical Device] likely means that the effectiveness of approaches to labeling will need to be evaluated and should evolve over time. A new risk framework or labeling classification system may need to be created in order to better define a process that correctly depicts algorithmic safety and efficacy,” they wrote in the report.

RELATED: Health IT stakeholders want more from FDA’s clinical decision support guidance

Payer coverage and reimbursement to provider systems for the use of AI-enabled CDS will also drive adoption and increase the return on investment for these technologies, the researchers noted. Clarity is needed around the use cases for diagnostic support software that public and private payers would consider appropriate for coverage. “Costs involved with using the software would ideally be low compared to the savings the product produces. However, models could include savings from using the software to automate treatment approvals, saving time on writing and reviewing authorizations,” the working group wrote.

The working group also highlighted the importance of ensuring that AI systems are ethically trained and flexible. “Best practices to mitigate bias that may be introduced by the training data used to develop software are critical to ensuring that software developed with data-driven AI methods [does] not perpetuate or exacerbate existing clinical biases,” the researchers wrote.

In addition, best practices and, potentially, new paradigms are needed to protect patient privacy. The working group recommended several potential solutions, including increased security standards, the establishment of certified third-party data holders and regulatory limits on downstream uses of data. Industry stakeholders could also promote better data stewardship by establishing a national framework to guide companies that use personal data, the working group wrote.