In the absence of a federal framework to monitor the impact of artificial intelligence in the clinic, the Coalition for Health AI (CHAI) is stepping in on post-deployment oversight.
The Food and Drug Administration (FDA) lacks the capability to assess how models perform in the real world after the agency authorizes them for use. That gap in post-deployment monitoring has been a major barrier to the industry's adoption of AI.
In an interview with Fierce Healthcare, CHAI CEO Brian Anderson, M.D., said the group will add a post-deployment monitoring feature to its national model card registry, through which health systems will share results against agreed-upon metrics for evaluating how AI performs in local contexts.
Because third-party evaluations of AI models are not yet standard practice in healthcare, health systems often have no guarantee that a model they purchase will work as well on their population as the vendor claims. AI applications can also degrade in performance over time, which makes ongoing monitoring all the more necessary.
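Neither the CHAI nor the article specifies a particular monitoring method, but as a minimal sketch of what ongoing performance checking could look like, assuming a risk-scoring model whose predictions can eventually be compared against observed outcomes (the baseline, margin and data below are invented for illustration):

```python
# Minimal sketch of post-deployment degradation checking (illustrative only).
from sklearn.metrics import roc_auc_score

BASELINE_AUROC = 0.80  # performance measured at deployment (invented)
ALERT_MARGIN = 0.05    # how much slippage triggers a review (invented)

def degraded(y_true: list[int], y_score: list[float]) -> bool:
    """Return True if this period's AUROC fell below the alert threshold."""
    return roc_auc_score(y_true, y_score) < BASELINE_AUROC - ALERT_MARGIN

# Invented example: one reporting period's observed outcomes and model scores
labels = [0, 0, 1, 0, 1, 1, 0, 1, 0, 0]
scores = [0.2, 0.3, 0.4, 0.1, 0.9, 0.35, 0.5, 0.6, 0.15, 0.25]
if degraded(labels, scores):
    print("Performance drift detected: flag model for local re-evaluation")
else:
    print("Performance within expected range")
```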
In the CHAI’s national model registry, health systems will report on how a particular model is performing using the same language and metrics as other health systems, enabling standardized comparisons.
“There's a real lack of understanding about how these models perform locally [and] the potential variance in their performance from one institution to another,” Anderson explained. “There's no place, yet, anywhere, where health systems can come together to share that information and begin to have common metrics, common ways of evaluating, so you can really get to an apples to apples comparison of how these models actually work.”
The CHAI is still building the model card registry in collaboration with Avanade. Once the registry is complete, health systems will be able to get a basic understanding of a model’s training data, fairness metrics and intended use. Inclusion on the registry amounts to a CHAI “stamp of approval” for AI vendors that have correctly filled out a CHAI model card.
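The CHAI’s actual model card schema isn’t reproduced here, but as a rough illustration of the kind of structured disclosure the article describes (every field name below is hypothetical, not CHAI’s format):

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical model card entry; fields are illustrative, not CHAI's schema."""
    model_name: str
    developer: str
    intended_use: str
    training_data_summary: str           # e.g., source populations and date ranges
    fairness_metrics: dict[str, float]   # e.g., performance gaps across subgroups
    out_of_scope_uses: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="sepsis-risk-v2",
    developer="ExampleVendor",
    intended_use="Early warning of sepsis risk in adult inpatients",
    training_data_summary="De-identified EHR data, 2018-2023, three academic centers",
    fairness_metrics={"auroc_gap_by_sex": 0.02, "auroc_gap_by_age_band": 0.04},
)
print(card.model_name, card.intended_use)
```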
As the model registry is being built out, the CHAI is also weighing options for how to incorporate post-deployment monitoring results.
The founding health systems for the national registry are the Cleveland Clinic, Kaiser Permanente, Memorial Sloan Kettering, Mercy, the Mount Sinai Health System, Providence, the Rush University System for Health, Sharp HealthCare, Stanford Medicine, UMass Memorial and the University of Texas Health System. Through CHAI work groups, the systems are deciding on standard metrics, which they will soon publish on the registry for public use.
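The work groups haven’t published those metrics yet, so purely as a sketch of how shared metric names could enable the apples-to-apples comparisons Anderson describes (systems, model and numbers are invented):

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SiteReport:
    """Hypothetical per-site report; the registry's real schema is still being defined."""
    health_system: str
    model_name: str
    reporting_period: str
    auroc: float              # discrimination on the local population
    alert_precision: float    # positive predictive value of the model's alerts

# Two sites reporting the same metrics for the same model (invented numbers)
reports = [
    SiteReport("System A", "sepsis-risk-v2", "2025-Q1", auroc=0.81, alert_precision=0.32),
    SiteReport("System B", "sepsis-risk-v2", "2025-Q1", auroc=0.74, alert_precision=0.21),
]

# Shared metric names make cross-site comparison straightforward
print(f"AUROC across sites: {min(r.auroc for r in reports):.2f}-{max(r.auroc for r in reports):.2f}")
print(f"Mean alert precision: {mean(r.alert_precision for r in reports):.2f}")
```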
Anderson participated in an onstage conversation at the POLITICO 2025 Health Care Summit last week with California Democrat Rep. Ami Bera, M.D., and Paragon Health Institute fellow Kev Coleman. All three spoke to the value of public-private partnerships for AI oversight.
The three discussed the FDA’s limited capacity to monitor the performance of healthcare AI applications once they are in use at clinics. Bera and Coleman said they do not believe the FDA lacks the statutory authority to regulate AI, though Bera suggested the agency may need to revamp its approach to accommodate generative technology.
“I think they've got the authorities," Bera said. "How they interpret and utilize those authorities, they may have to modernize … but I think again, we can work with the administration in this particular context to provide that guidance."
Bera said he hopes House leadership revives an AI select subcommittee to continue hashing out these issues.
Coleman said the FDA needs to adapt to the realities of AI. “FDA as an oversight agency is constantly having to evolve to the material realities of the marketplace,” he said. “This is just one more instance of this.”
It’s not yet clear how the Trump administration will approach healthcare AI policy, but Anderson told Fierce Healthcare that the CHAI has received interest from Trump administration officials in participating in its working groups to see how the healthcare industry is deploying AI.
“I've received nothing but positive support and interest in engaging across our working groups," Anderson said. "So we're really excited to be taking those next steps with the new administration."
The CHAI has also had meetings on Capitol Hill about the need to bolster the U.S. infrastructure for healthcare AI governance and monitoring. Some of the CHAI’s quality assurance labs, which the organization is now referring to as quality assurance resources, will have the capability to help lower-resourced healthcare providers perform post-deployment monitoring of AI.