Providers, vendors building health AI have 'shared responsibility' to ensure safe adoption industrywide

BOSTON—There’s no shortage of providers and vendors working to develop and deploy artificial intelligence in healthcare—but not all who are delivering care are in a position to make use of these novel technologies without risking harm.

Industry groups like the Coalition for Health AI (CHAI) have come together in recent months to outline guidance for the responsible use of AI. These frameworks span pre-deployment, implementation and post-deployment, and set out criteria for developers and end users that address concerns like performance drift, safety and bias.

However, to ensure AI makes a positive difference for patients—or even becomes trusted enough for use by the healthcare workforce—the onus is on third-party developers, and on the health systems building tools from scratch, to support provider organizations that lack the technical expertise to meet these guidelines, healthcare leaders said Thursday at the HIMSS 2024 AI in Healthcare Forum.

“We get the opportunity to hear [from] these health systems that have varying levels of expertise—what their questions are, what they’re worried about,” Sonya Makhni, M.D., medical director for the Mayo Clinic Platform, said during a discussion on AI adoption. “Oftentimes, they have those same problems: I don’t have a data scientist or informaticist to tell me what I need to look out for.

“How do we empower those geographies and those areas to deliver those AI solutions safely? Sometimes that’s going to fall on health systems and other groups; sometimes and oftentimes it should fall on the solution developers. I think we have to have a shared responsibility, where healthcare systems need to know what to ask, [they] need to know what information is relevant, they need to expect that information—and our solution developers need to meet that,” she said.

Peter Bonis, M.D., chief medical officer for Wolters Kluwer Health, agreed with Makhni. He noted that, in practice, the ability of large platform vendors such as electronic health record makers to weave these applications into their offerings will be a key factor in provider adoption. As algorithms become easier to develop and more third parties petition to be included, those vendors will need to be thoughtful about the partner offerings they integrate into their products.

“It’s really separating the wheat from the chaff,” he said during the session, “having enough structure in place to make sure that these things are safe and effective, that they are deferential to existing workflows and to have governance processes.”

Speaking to Fierce Healthcare following the panel, Bonis noted that, from the application developer’s position, some of that demand for ease of use and clarity is built into the sales process. Successful vendors will already be communicating a “plausible, credible and measurable case that you deliver clinical—and ideally economic—value” to those in the provider organization making a purchasing decision, “or they’re never going to prioritize it.” 

User-centered design, monitoring across ‘the entire AI lifecycle’

As important as usability and ease of deployment may be for the industry’s under-resourced adopters, the panelists stressed that health AI stakeholders must also play a role in ensuring accuracy and maintaining trust.

There’s no “one size fits all” for healthcare AI, Makhni said, as different clinical and nonclinical AI applications carry unique risks and affect different populations.

Vendor and health system developers need to keep in mind “the entire AI lifecycle,” she said, which includes taking a multidisciplinary, user-centered look at what problems are being solved and validating a solution to make sure that performance doesn’t deteriorate over time or across different data sets, “[a]nd then, of course, communicating that information transparently, in a way that empowers the end user.”

“At Mayo Clinic Platform there is a program called Solution Studio, where AI solutions come in, we meet them where they are in their journey, and through a collaborative approach where we have clinical expertise … work with the solution developer, stakeholders and end users [and] ensure that all these pieces are coming together cohesively,” she said.

Bonis described a similar focus on user-centered design and accuracy for Wolters Kluwer’s health AI products. The company’s development process brings together engineers, content creators, clinicians and other provider end users “sort of sitting around and looking at every single dimension of information retrieval.”

To balance usability with the danger of missing clinical context, he said the vendor’s developers work to introduce “speed bumps” into the workflow that prompt an end user to specify contextually necessary information before the tool gives its output—for instance, a clarification of whether a patient is pregnant before recommending treatments that could affect a fetus.

From there, the company runs a long series of internal and then limited external tests, monitoring an AI product’s safety, usability and other performance measures before a broader release, he said.

“So we very much follow traditional product principles,” he said, “but considering the stakes are so high for clinical care, we really double down [and] make sure that these things are tested as objectively and rigorously as possible.”

Speaking after the session, Bonis also noted that regulatory compliance is a consideration for vendors.

“But I don’t think that’s sufficient,” he said. “You have to go beyond that. … I don’t want to pick on any particular vendor, because anyone has their faults, but you saw this in the area of sepsis predictions where some of the algorithms which were being advanced and were already made clinically available did not perform well and, presumably, that led to adverse patient outcomes.

“So it’s an ecosystem, and the vendor’s responsibility has to be part of it because there’s no way that you can cross every ‘t’ and dot every ‘i’ as an external regulator and make sure that there’s nothing hidden that’s going to cause trouble.”