Epic plans to launch AI validation software for healthcare organizations to test, monitor models

As artificial intelligence rapidly advances in healthcare, the industry is grappling with how to evaluate AI models for accuracy and performance, and how to monitor the technology for downstream adverse outcomes.

EHR giant Epic plans to release an AI validation software suite to enable healthcare organizations to evaluate AI models at the local level and monitor those systems over time, Seth Hain, Epic senior vice president of R&D, said in an exclusive interview.

Epic has developed what the company calls an "AI trust and assurance software suite," which automates data collection and mapping to provide near-real-time metrics and analysis on AI models. The automation creates consistency and eliminates the need for healthcare organizations' data scientists to do their own data mapping—the most time-consuming aspect of validation, according to Hain.

The key is to enable AI testing and validation at a local level and allow ongoing monitoring at scale, Hain noted.

"We'll provide health systems with the ability to combine their local information about the outcomes around their workflows, alongside the information about the AI models that they're using, and they will be able to use that both for evaluation and then importantly, ongoing monitoring of those models in their local contexts," Hain said during the interview.

The company is putting its hefty weight behind the idea that AI models should be validated on local patient populations and monitored on an ongoing basis.

A critical access hospital in rural Nebraska sees a different mix of patients and has different workflows than a dedicated cancer center in New York City, experts point out. And a critical access hospital in one rural part of the country will have a different patient population than a hospital in another rural area. Whatever validation standards emerge, healthcare organizations will need the ability to run AI validation in the EHR against their own patient populations and workflows, Epic executives said.

Epic plans to release that capability in the next four to six weeks with ongoing updates throughout the summer.

Hain said the AI software suite includes intuitive reporting dashboards that are updated automatically. Users get analysis broken down by age, sex, race/ethnicity and other demographics.
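
To make concrete what that kind of stratified reporting involves, here is a minimal sketch of subgroup-level performance monitoring on local data. It is not Epic's implementation: the column names (risk_score, outcome, age_band, sex, race_ethnicity) and the pandas/scikit-learn approach are assumptions for illustration.

```python
# Minimal sketch of subgroup performance monitoring on local data.
# Hypothetical, not Epic's implementation: all column names are assumed.
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auroc(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """AUROC of a model's risk score within each demographic subgroup."""
    rows = []
    for group, sub in df.groupby(group_col):
        # AUROC is undefined when a subgroup contains only one outcome class.
        if sub["outcome"].nunique() < 2:
            continue
        rows.append({
            "group": group,
            "n": len(sub),
            "auroc": roc_auc_score(sub["outcome"], sub["risk_score"]),
        })
    return pd.DataFrame(rows)

# Usage: stratify by each demographic the dashboards report on.
# predictions = pd.read_csv("local_predictions.csv")  # hypothetical extract
# for col in ["age_band", "sex", "race_ethnicity"]:
#     print(subgroup_auroc(predictions, col))
```

Running a computation like this on a schedule, rather than once at go-live, is what turns a one-time validation exercise into the ongoing monitoring Hain describes.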

The software features a common monitoring template and data schema to make it easier to extend the suite to new AI models in the future, he noted.

The health IT company also plans to make the suite’s monitoring template and data schema publicly available when the suite is released this summer, enabling healthcare organizations to use the software to monitor their own custom AI models as well as AI technology from third-party vendors.
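
Epic has not published that schema, so the sketch below is only a hypothetical illustration of what a common monitoring record could look like; every field name is an assumption.

```python
# Hypothetical illustration of a shared monitoring record.
# Epic has not published its schema; every field name here is assumed.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ModelMonitoringRecord:
    model_id: str              # identifies the AI model (vendor or homegrown)
    model_version: str         # version deployed at this site
    prediction_time: datetime  # when the model produced its output
    prediction: float          # the model's output, e.g. a risk score
    patient_context: dict = field(default_factory=dict)  # demographics for stratification
    observed_outcome: float | None = None  # filled in once the outcome is known
```

The point of a common record shape is that one dashboard can then monitor vendor-supplied, homegrown and third-party models side by side.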

As AI best practices are developed, the open-source framework will enable organizations to bring in those standards and practices alongside the AI validation capabilities.

"As organizations and bodies build out best practices, we want to make sure that the health system can use this tool set alongside with those in mind to be able to analyze and understand their circumstances," he said. "At scale, [the suite] opens the opportunity for health systems to be able to understand outcomes alongside AI, and do so in a way that is open and flexible to the evolving best practices around that type of analysis."

Hain believes AI validation tools like the one Epic plans to release will help build trust in healthcare AI.

"When you build into the foundation of the applications, capabilities to bring alongside the evidence of what's happening on the ground, with what's happening with the technology, then you can move quickly with confidence. That's what is behind us building out this new AI trust and assurance suite is a series of capabilities that can evolve openly in regards to how an organization is doing that analysis and provide a framework for ongoing speed and confidence," he said.


How to test AI tools used in healthcare

Healthcare organizations have been using predictive AI models and machine learning for almost a decade. But large language models and generative AI tools present a different challenge. Health systems are moving quickly to deploy LLMs and gen AI to tackle tasks like summarizing medical records and automating clinical note-taking. But these early adopters are still working out the best methods for validating AI models so they can be confident in the technology's accuracy, performance and safety.

Many healthcare and health IT leaders see Epic's move as a step in the right direction to provide tools that enable local auditing of AI models.

Christopher Longhurst, M.D., chief medical officer and chief digital officer at UC San Diego (UCSD) Health, says the software suite Epic is developing will help the health system audit outcomes as it deploys AI models and will be incorporated into its AI governance process.

"The software suite allows an organization like us to look at outcomes of algorithms, whether they're vendor-supplied algorithms or homegrown algorithms, and we need to be testing that locally," he said in an interview. 

UCSD Health has taken the lead in deploying healthcare AI—it launched a pilot with Epic to test the company's generative AI tool that drafts responses to patient messages, and it also hired a chief health AI officer.

The health system recently published a study on the impact of using an AI model in emergency departments to quickly identify patients at risk of sepsis. The study found the AI algorithm resulted in a 17% reduction in mortality.

“Twenty or 30 percent of the outcome we saw was because of the algorithm. The secret sauce, maybe 70% to 80%, was actually all of the local context," Longhurst noted. "The algorithm was built on the population we're actually serving. We did some process redesign to ensure that the alert went to a centralized code team, as well as to the front lines, along with the education of our clinicians in the emergency department. Our lived experience locally has been that the local context of implementation of these systems is actually the most important part when it comes to achieving the clinically meaningful outcomes that we all want.”

Aneesh Chopra, president of healthcare technology company CareJourney, says Epic's AI validation software is a step in the right direction, enabling local AI model validation and helping health systems focus on the local outcomes that result from using those AI technologies.

"Providers and health plan leaders should remain focused and accountable for the outcomes they deliver, whether those outcomes are informed by AI products and services or not. Organizations need to be committed to responsible AI use while focusing on the outcomes that are critical for the healthcare ecosystem," he said in an interview.

Epic is releasing the AI validation tool amid an active national dialogue about how best to validate AI, including proposals to build out a national network of AI assurance labs to test algorithms. Prominent health AI groups such as the Coalition for Health AI have backed the concept, as have top regulators at the Office of the National Coordinator for Health IT and the director of the Digital Health Center of Excellence at the Food and Drug Administration.

Longhurst contends that national AI assurance labs are "necessary but not sufficient."

"The only way to achieve outcomes is going to be with local workflows and local optimization. It's a shared responsibility where both the vendors and the local health systems where these vendor algorithms are deployed need to share the responsibility of the governance and oversight," he said. "The vendors need to be responsible for ensuring that any standard algorithms are developed with minimal bias on datasets that are representative, but even a representative dataset may or may not look like the population of patients that I serve at UC San Diego Health in safety net hospital, or that Mayo Clinic serves among a highly commercial population of second opinions. There also has to be that responsibility at the sharp end where health systems are monitoring the performance of both vendor algorithms and homegrown algorithms." 

Epic and many other health IT leaders also contend that a focus on local validation follows guidelines provided by the White House.

In its Blueprint for an AI Bill of Rights, the Biden administration outlined its recommendations for AI testing, noting that “testing conditions should mirror as closely as possible the conditions in which the AI will be deployed.” The administration has also recommended that AI be monitored for adverse outcomes on an ongoing basis rather than through a single, off-site evaluation.

"I think the need for greater transparency and local accountability is going to continue to evolve and [Epic's software] is an important step on a journey, where there will be many more steps," said Chopra, who also served as the first U.S. chief technology officer under President Barack Obama. "This is a natural evolution of the spark launched with the HITECH Act to really jumpstart this kind of data-driven chapter of healthcare delivery. This software suite is a proponent that will help contribute towards what I hope is a healthcare system that learns a lot faster and is capable of helping physicians, nurses and patients make better decisions at every step of their journey."

Longhurst believes the federal government should go a step further and incorporate responsible AI use into Medicare's conditions of participation. In the same way that hospitals and health systems are audited for quality and safety, organizations should also be required to meet compliance requirements around AI governance, he said.

"I think it would be a very straightforward update to the CMS Conditions of Participation, to suggest that health systems who are choosing to use AI in the delivery of care should have data governance committees tasked with oversight of the outcomes and reviewing them regularly," he said. "I believe that we as healthcare delivery organizations have a responsibility to our patients, not to just implement these algorithms and ignore them, but to monitor them closely like we do any other clinical pathway or clinical decision support process."

It's likely that many hospitals would push back on the idea of including AI use in Medicare conditions of participation, citing a lack of resources to fund governance committees. 

"I would say back to that, 'Well, then you shouldn't be doing AI alerts.' It is really the only way to audit to ensure that's being done appropriately, safely, trustworthy and respectfully, meeting the White House guidance," Longhurst said.