CHAI releases draft framework of quality assurance standards for healthcare AI

The Coalition for Health AI (CHAI) has released a concrete proposal for testing the quality of healthcare AI. The coalition seeks to create industry-wide agreement on good, responsible healthcare AI – a feat that will only become more complicated once public feedback is added. 

Its draft Responsible AI Framework includes an Assurance Standards Guide and a set of checklists, intended both for the eventual assessment of AI in CHAI assurance labs and as open-access resources companies can use to internally evaluate the quality of their AI products. The framework will also be available to independent third-party quality assurance assessors outside of CHAI as a benchmark for reviewing AI. 

“It's not easy to build a common agreement on a lot of these things,” Brian Anderson, CEO and co-founder of CHAI, told Fierce Healthcare. “And I will readily admit to you that as we get greater technical clarity and specificity in some of these areas, it may get increasingly more difficult to build.” 

The Assurance Standards Guide details six use cases of AI in healthcare and explains how parties across the AI lifecycle can ensure quality and reduce the risk of negative outcomes. 

The technical specifications the Guide offers are written to speak to each group that will use them: data scientists involved in development, health system chief information officers and support staff who may be putting the algorithm in front of the patient. 

The six use cases cover predictive electronic health record risk, imaging diagnostics, generative AI, claims-based outpatient care, prior authorization with medical coding and genomics. 

CHAI used a consensus-based approach to create the draft framework, involving more than a hundred individuals over its eight-month development. Six of CHAI’s working groups reviewed the framework, as did a team of independent reviewers listed in the document. 

The public will have 60 days to comment on the draft framework through CHAI’s website. After the public comment period ends, CHAI will review and incorporate feedback into a final framework. It plans to review additional feedback at least twice a year for the next one to two years as organizations begin to implement the framework. 

The framework will be iterative, Anderson said. “We don’t want it to go stale.” 

Anderson said the public comment process was intended to mirror the opportunity the public has to comment on government rulemaking. 

“We want to ensure that the public has the chance to offer their perspective, that as many stakeholders as possible have a chance to contribute to building trustworthy, responsible AI that has a common, agreed-upon definition … It cannot be something that is just done in a closed-door set of working groups,” Anderson said. 

He clarified that the framework is not intended to act as government regulation, but rather to create a consensus definition of responsible AI. 

Anderson stressed that CHAI considers local validation and external validation of AI equally important aspects of quality testing. The draft framework also aims to equip lower-resourced health systems with consensus-based standards for ensuring the quality of algorithms intended for use in healthcare. All testing of AI through the federated network of assurance labs will be free of conflicts of interest, he added.

There is an ongoing debate in the healthcare industry, and among lawmakers, about how to develop guardrails around the use of AI in healthcare. Recently, Republican lawmakers criticized the Food and Drug Administration's collaboration with CHAI to create a network of laboratories for testing AI products.

STAT reported that Republican lawmakers sent a letter to the FDA cautioning that a relationship with CHAI represents a "conflict of interest" because technology companies such as Microsoft and Google are members of the coalition.