CHAI floats draft framework to certify AI assurance labs as oversight efforts move forward

The Coalition for Health AI (CHAI) announced the completion of two draft documents on Friday that could significantly shape the future of AI in healthcare.

CHAI champions the creation of a national network of independent assurance labs for healthcare AI. It will share with its members a draft framework for certifying those future assurance labs. CHAI has also completed a draft of its version of an AI model card, sometimes called an AI "nutrition label."

“I think what it establishes, for the first time in the health industry, is a minimum threshold around disclosure and transparency for AI models,” Brian Anderson, M.D., CEO of CHAI, told Fierce Healthcare.

This comes as Andrea Palm, deputy secretary of the Department of Health and Human Services (HHS), announced at Health Datapalooza on Sept. 17 that HHS will be “delivering” a network of AI assurance labs.

“We're going to double down on our partnerships with industry on … delivering a national network of assurance labs that honor our quality assurance framework for ensuring that models of AI and algorithms are safe and effective,” Palm said.

All of the AI assurance labs will undergo a certification process to ensure they meet appropriate integrity requirements. Anderson told Fierce Healthcare in September that 32 sites are in the running to be certified. CHAI will announce the first two assurance labs by the end of the year, he said.

Chief among the requirements for certified assurance labs are disclosure of any conflicts of interest and adherence to data quality and integrity standards that CHAI drew from the Food and Drug Administration (FDA). Another key responsibility of the labs is protecting the intellectual property of the models they test.

“Assurance labs that can clearly articulate and transparently share the kinds of testing data, and making that testing data very representative for any given health system as a customer is going to be a very powerful capability across this network,” Anderson said. “And so ensuring that the certification framework has requirements in it around transparency on the data, quality, the different kinds of features that might go into creating that data set are going to be really important if we want to build the kind of trust in these labs.”

Anderson said the draft certification framework was written in conjunction with the ANSI National Accreditation Board and meets ISO 17025, the international standard for testing and calibration laboratories.

Anderson predicts that, after reviewing the draft documents, CHAI members will want to dig deeper into metrics and into how the assurance labs can be standardized so that model developers can’t shop around for the assurance lab that gives them the most favorable result.

CHAI’s draft model card for AI models meets the requirements of HTI-1, the final rule from the Assistant Secretary for Technology Policy (ASTP)/Office of the National Coordinator for Health IT (ONC). Per HTI-1, AI developers must put an AI nutrition label on their models, but only when the models are deployed through certified electronic health records (EHRs). Algorithms that meet the Decision Support Intervention criteria through ASTP’s certified EHR program must comply with the new requirements by Jan. 1, 2025.

HTI-1 requires the model cards to state the source of funding, intended use, intended patient population and known risks of the model, among a long list of other requirements. When ASTP released HTI-1, it did not prescribe how the model card should look.
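To make the disclosure requirements concrete, here is a minimal sketch of how a model card covering the HTI-1 fields described above might be represented as structured data. The field names and example values are illustrative assumptions, not CHAI’s or ASTP’s actual template.

```python
# Hypothetical sketch of an HTI-1-style model card as structured data.
# Field names and values are illustrative assumptions, not an official schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    model_name: str
    developer: str
    funding_source: str       # HTI-1: source of funding
    intended_use: str         # HTI-1: intended use
    intended_population: str  # HTI-1: intended patient population
    known_risks: list[str] = field(default_factory=list)  # HTI-1: known risks

card = ModelCard(
    model_name="Example Sepsis Risk Model",
    developer="Example Health AI Vendor",
    funding_source="Internal research and development",
    intended_use="Early warning for inpatient sepsis risk",
    intended_population="Hospitalized adults",
    known_risks=["May underperform on populations underrepresented in training data"],
)
print(card)
```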

Now, CHAI is ready to offer one.

CHAI’s model card can be used by all health AI developers, not just those under ONC’s purview, which is limited. The CHAI model card also demands more technical specificity than the general requirements laid out in HTI-1, Anderson said.

Because of this, Anderson said the model card could be used to inform future developments of HTI-1. Micky Tripathi, Ph.D., assistant secretary for technology policy and acting chief artificial intelligence officer at HHS, "went on record saying that industry hasn't come to consensus yet, and this is our first stab at industry coming to consensus,” Anderson said.

Anderson said that CHAI has not yet figured out how to make post-deployment monitoring of AI equitable for lower-resourced health systems.

“The post deployment monitoring part of the framework … deeply concerns me, because I don't want a reinforcement of the digital divide as the AI revolution continues,” Anderson said.

The CHAI membership will review the draft documents at the CHAI Global Summit held at the HLTH 2024 conference in Las Vegas later this week. The documents are expected to become public in April 2025.