Three out of four U.S. patients don’t trust artificial intelligence in a healthcare setting, and most have no idea whether their provider is already using the technology, a new survey found.
The poll, conducted by Carta Healthcare, surveyed 1,027 U.S. adults in August. The company makes products aimed at streamlining administrative tasks for providers. This was Carta's first survey focused on AI.
Nearly 4 in 5 patients in the U.S. reported not knowing whether their provider is using AI. In reality, Carta executives said, all healthcare providers use AI and have for a long time.
Everything from MRI machines to CT scanners to echocardiograms uses AI, Matt Hollingsworth, co-founder and CEO of Carta, told Fierce Healthcare. “Right now, people are thinking AI is ChatGPT,” he said.
But with a background as an AI engineer, Hollingsworth knows that the technology behind large language models, like the kind underlying GPT, goes back decades.
“People have been using it, they just didn’t think of it in the same way,” he said. “AI is AI until people use it, and then it’s just technology.”
More than 40% of respondents to Carta’s survey acknowledged their understanding of AI is limited. At the same time, they are split on whether they would be comfortable with it in a healthcare setting: half say yes, half say no.
When asked whether they would be comfortable with AI that helps improve diagnostic accuracy, 42% still said they would not be.
Hollingsworth suspects this is because people fundamentally don’t believe that AI can help with accuracy, not because they don’t want more accurate care.
“People feel like this is moving faster than it actually is,” he said. “This has been a long time coming.”
Nearly two-thirds of respondents are worried that the use of AI may lead to less face time with their healthcare provider, and more than a third don’t trust that their provider would be able to use it properly.
However, about two-thirds reported that an explanation of their provider’s AI use would make them more comfortable. Most (80%) said knowing about their practice’s AI use is important to improving their comfort level.
Can transparency do more harm than good?
Hollingsworth sees education as critical to reversing some of the public’s negative perceptions of AI in healthcare. People need to understand how AI works and the extent to which it is already embedded in healthcare, he said. But he also worries that since many are starting from a position of fear, once people learn how much AI is used in healthcare, they may begin to avoid care altogether.
Transparency from providers could also help widen understanding, Hollingsworth said. But he questioned what good it would do if it confused patients more than it improved their understanding.
The technology underlying GPT is less capable of harming a person than a PET scan, Hollingsworth noted, but patients don’t ask how a PET scan works. “They don’t know what the hell to do with that,” he said.
Though its products are AI-powered, Carta doesn’t sell or market AI itself, according to Hollingsworth. “They trust us to use AI in a way that doesn’t mess things up,” he said, referring to the risk Carta bears for the accuracy of its products.
Hollingsworth believes healthcare has a “powerful safety net” when it comes to preventing abuse of AI. There are frameworks for addressing medical malpractice, including malpractice (“med-mal”) insurance. The Food and Drug Administration is tasked with ensuring the safety and effectiveness of many AI-driven medical products. Other regulations, like HIPAA, protect patient privacy in data collection.
As long as AI is generating positive outcomes, he argued, patients should trust that experts like their providers know how AI works and can leverage it to help them. “If you have a hard problem, you should be glad that AI is helping you,” he said.