Most doctors, consumers need transparency to trust generative AI in clinical decisions, survey finds

Though physicians are quickly growing more comfortable with generative AI, the vast majority expect models to be transparent and trained on credible information, a new survey found.

The Wolters Kluwer Health survey polled 100 physicians in February. Seeking to understand how doctors view generative AI, it found a rapid shift in acceptance of the technology. Sixty-eight percent said they have changed their views on generative AI over the past year, and 40% said they are ready to use it at the point of care this year.

Clinicians and consumers alike have had time to get acquainted with generative AI and to experiment with some tools, Peter Bonis, M.D., chief medical officer at Wolters Kluwer Health, said.

“I think there’s been just the fastest adoption of any technology platform perhaps in history with these large language models, and of course this captured the imagination,” Bonis told Fierce Healthcare.

At the same time, consumers are less confident in the technology than clinicians are. A separate Wolters Kluwer Health survey of 1,000 consumers, conducted in November 2023, found 4 in 5 consumers said they would be concerned about the use of generative AI in a diagnosis. By contrast, only 1 in 5 docs believed patients would be concerned.

“It’s extremely fluid and fluent in the responses. It maintains context and it’s very compelling … but it can also be dead wrong,” Bonis noted of generative AI.

For that reason, both physicians and consumers agree guardrails are critical. The vast majority of docs (91%) said they would need to know that the data used to train the tools were created by doctors and medical experts before using them in clinical decisions. Nearly as many said they need vendors to be transparent about where the information came from, who created it and how it was sourced.

Similarly, nine out of 10 consumers said clinicians need to be clear and transparent about the use of generative AI in healthcare. And three-quarters of docs would be more comfortable knowing the tool came from an established vendor in the sector.

When asked about generative AI applications, most physicians agreed it could improve care team interactions with patients. More than half believed it can save them 20% or more of their time, while two-thirds said it can speed up searches of the medical literature. Many also recognized it can save time by summarizing patient data in the EHR.

Consumers, too, see potential benefits. Nearly half said generative AI can help improve healthcare by reading medical tests or images more thoroughly, while more than 4 in 10 said they would use the tech as a resource for follow-up questions.

Over a third of surveyed physicians said their organizations have no guidelines about using generative AI, and almost half said they are not aware of any guidelines.

UpToDate, Wolters Kluwer's point-of-care medical resource platform, has explicit guidance that prohibits creating content with generative AI, Bonis said. Additional guidance is beginning to emerge from regulators and academia.

“The many other shoes are going to drop for creating a regulatory framework for how these applications should be used in the healthcare arena,” Bonis said.

While Wolters Kluwer has been leveraging AI for years, from identifying medication diversion to helping nursing students with personalized learning, it is approaching generative AI cautiously, according to Bonis. In late 2023, UpToDate launched AI Labs, which lets the company collaborate with users to develop new generative AI capabilities and uses that enhance clinical decisions.

When it comes to including clinicians in generative AI development, “it’s absolutely critical,” Bonis said. “It is imperative that you understand workflow … try to alleviate the burden that clinicians now face and to do so in the spirit of trying to advance care.”