As artificial intelligence rapidly makes inroads into healthcare, federal agencies already have the authority to regulate AI at the hospital bedside, according to some healthcare researchers.
Many groups have called for new regulation of AI in healthcare. But researchers at the University of Pennsylvania and the Duke University School of Medicine contend in a recent JAMA Health Forum viewpoint article that the Department of Health and Human Services (HHS) and the Centers for Medicare & Medicaid Services (CMS) should use the existing mandates in the Medicare and Medicaid Conditions of Participation (CoPs) to oversee AI safety in hospitals.
Medicare’s CoPs set health and safety standards for healthcare organizations and are designed to protect patient health and safety regardless of the intervention or technology involved.
While Congress and federal agencies are currently discussing how to regulate the use of AI in healthcare, Lee A. Fleisher, M.D., an anesthesiologist at the University of Pennsylvania, and Nicoleta Economou-Zavlanos, Ph.D., director of Algorithm-Based Clinical Decision Support Oversight at Duke AI Health, argue that the CoPs already allow CMS to regulate AI at the bedside.
“Although there is currently no separate statutory authority to regulate AI in clinical care, we believe that the CoPs for hospitals already require them to develop policies and procedures related to the use of AI in their organizations detailing the qualifications and responsibilities of end users and those involved in monitoring safety issues when AI is used,” the researchers wrote in the article.
Algorithms may be less accurate for some patients if the models were not trained on data representing demographic subpopulations, such as minority racial and ethnic groups. Such algorithms then risk failing to flag important information for providers.
Likewise, if clinicians are not properly trained on tools such as clinical decision support and do not understand how to use them, patients can be harmed, the paper says.
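For readers who want intuition for the first failure mode, here is a minimal, hypothetical Python sketch (not from the JAMA Health Forum article): it trains a model on a cohort where one demographic group is underrepresented, then measures recall, the rate at which true cases are flagged, separately for each group. All data, group labels, and numbers are synthetic assumptions for illustration.

```python
# Hypothetical sketch: a subgroup audit of a clinical-style classifier.
# All data, group labels, and parameters are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic cohort: features, binary outcome, and a group label.
# Group "B" is deliberately underrepresented (5% of the cohort).
n = 10_000
group = rng.choice(["A", "B"], size=n, p=[0.95, 0.05])
X = rng.normal(size=(n, 5))

# The outcome depends on features differently for the minority group,
# so a model fit mostly on group A generalizes poorly to group B.
coef = np.where(group[:, None] == "A", 1.0, -1.0) * np.array([1.5, -1.0, 0.5, 0.0, 0.0])
y = (X * coef).sum(axis=1) + rng.normal(size=n) > 0

# Train on the first 8,000 patients, evaluate on the rest.
model = LogisticRegression().fit(X[:8000], y[:8000])
pred = model.predict(X[8000:])

# Report recall per group: a large gap is the kind of "won't flag
# important information" failure the researchers describe.
for g in ["A", "B"]:
    mask = group[8000:] == g
    print(f"group {g}: recall = {recall_score(y[8000:][mask], pred[mask]):.2f}")
```

Run on this synthetic cohort, the model flags most true cases in the well-represented group while missing most in the underrepresented one, which is the gap a per-subgroup audit is meant to surface.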
Through the CoPs, the authors say, CMS has the authority to investigate hospitals' patient safety practices, including in cases of unexpected deaths, errors and serious injuries. CMS and accrediting agencies can investigate reports of abuse, neglect or noncompliance with health and safety standards, and CMS can mandate remedial action plans if it finds that patient safety has been compromised. The same authority, they argue, extends to healthcare AI.
Fleisher and Economou-Zavlanos also contend that adverse safety outcomes and benign errors should be reported to the hospital and to the manufacturer or developer of the model.
In cases where models are considered medical devices and cleared by the Food and Drug Administration, adverse events should be reported to the drug and device regulator, they say. But many algorithms fall outside the FDA's purview, including algorithms that are developed within health systems and are not sold commercially.
Industry, and likely the federal government, will need to determine where safety incidents involving such algorithms should be reported if not to the FDA, the authors say.
"It is essential for CMS and the HHS to leverage their existing authority under the CoPs to ensure the safe implementation of AI in hospitals, leaving the assessment of algorithms or tools to the FDA and other bodies yet to be fully defined," the authors wrote.