Generative AI brings great potential—and risks—to payer space

Adoption of artificial intelligence in professional settings has swept the nation in the last year, with companies across industries looking to capitalize on the new technology.

Health insurers are no different. Large companies are testing the capabilities of new tools, like OpenAI's ChatGPT, to improve efficiency and productivity.

The technology gives payers the opportunity to automate call center interactions, prior authorizations and claims denials and appeals, but experts still have concerns about the safety of generative AI tools.

As of June 26, there had been 295 healthcare data breaches in the first half of 2023, impacting more than 39 million people, according to a Department of Health and Human Services Office for Civil Rights portal, as reported by Health IT Security.

“The opportunity here, in my opinion, far outweighs the risk, as long as we can be smart about how we develop and deploy it,” said CCS Medical Chief Technology Officer Richard Mackey.


Malicious actors thriving in ‘underground economy’

Generative AI is a type of machine learning in which models learn patterns from data and use them to generate information and answers. Popularized by OpenAI through its chatbot, the approach has since been adopted by other companies, such as Google with its Bard platform. As of July, OpenAI said ChatGPT had more than 100 million users, one of the fastest-growing user bases for an application in history.

But like most technology, it is susceptible to threats from bad actors. According to Jerry Sto. Tomas, chief information security officer with HealthEdge, generative AI can threaten health payers through ransomware, accidental disclosure by employees through home devices or hacking of internet-facing web applications.

“Remember, the generative AI learns through a large language model, or LLM, and it’s only as good as the training data the user provides,” said Sto. Tomas. “Accidental disclosure of that type of data can happen if guardrails are not properly configured and the LLM is not contained, meaning that training data is exposed to the public and compromised. The other piece … is how we ensure regulatory compliance with HIPAA.”

At CCS Medical, Mackey said the company overlays its own content on top of publicly available information, using Microsoft Azure and its OpenAI engine, to better secure its data. He noted the challenge is gathering a large enough volume of data to deliver accurate and consistent results.

The company also licensed the ChatGPT engine so it can integrate publicly available information with private, patient-related data. CCS Medical can pull from either data set, but it feeds data only into the private, proprietary set, which is not accessible to the public.
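CCS Medical has not published its implementation, but a rough sketch of that kind of separation, assuming Azure OpenAI's Python client and a hypothetical in-house document index (invented here for illustration), might look like this:

```python
# A minimal sketch of keeping proprietary data walled off from the public
# model. The client library (openai's AzureOpenAI) is real; the private
# index and deployment name are assumptions, not CCS Medical's actual stack.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="...",  # credentials and endpoint stay in-house
    api_version="2024-02-01",
    azure_endpoint="https://example-payer.openai.azure.com",
)

def answer_from_private_data(question: str, private_index) -> str:
    # Retrieve context only from the proprietary store; nothing here is
    # written back into the public model or its training data.
    context = private_index.search(question, top_k=5)  # hypothetical index API
    response = client.chat.completions.create(
        model="gpt-4",  # Azure deployment name -- an assumption
        messages=[
            {"role": "system",
             "content": "Answer using only these plan documents:\n" + "\n".join(context)},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

The separation Mackey describes maps onto this pattern: the public engine supplies the language ability, while patient-specific records live in a private store the public service never trains on.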

“What’s nice about that for a payer is that they’d be able to ask simple questions like, ‘Do you see evidence of this patient having gone through the first line of care?’ or whatever the case may be,” said Mackey. “And then it synthesizes thousands of proprietary data points specific to the patient, but it’s not intermingled with the publicly available ChatGPT content from OpenAI. Then the specialist or the payer can review that answer and, if needed, go back to the specific references.

“It's that ability to synthesize very large volumes of unstructured data and be able to then drill down into it as needed," he added. “We've never seen a platform or tool do anything like that as elegantly as what this does.”

Sto. Tomas said other primary concerns include insider threats, such as employees feeding sensitive data into chatbots or digital assistants like Amazon’s Alexa, as well as malicious threats from outside actors.

“Generative AI used to be hype and now it’s real,” he said. “The underground economy is already capitalizing on and monetizing it, developing sophisticated malicious attacks against individuals and businesses, particularly through phishing, social engineering and identifying insecure code and vulnerabilities in business software.”

He said the underground economy harbors layers of bad-faith actors sowing mistrust beneath the surface, even as generative AI’s promise is lauded in the “above economy.”

“We’ve heard many times that generative AI is a disruptive technology, but in the underground economy it’s a destructive technology, because they’ve got this technology they can monetize in every industry, not just the healthcare space,” Sto. Tomas added.

Even though generative artificial intelligence, and the new tools it brings, can send shivers up the spines of IT professionals, both Sto. Tomas and Mackey agreed that generative AI poses the same risks as before, just in a newer form.

“It’s not that we’re seeing brand-new risks that were unheard of before,” said Mackey. “We’re just seeing them in a new technology.”

So how do health payers protect against these modernized threats? There are steps insurers can take to make sure they are looking out for their customers.

Sto. Tomas recommends building a governance structure or committee to “review anything related to generative AI,” calling adoption a risk decision rather than the technical decision it is often considered to be.

He believes the structure should include evaluation and testing of inputs and outputs to ensure answers are accurate, safe and unbiased with regard to race and socioeconomic status, and that vendor due diligence should include an architecture audit.
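Neither Sto. Tomas nor HealthEdge has published such a test harness, but as a minimal, hypothetical sketch, an automated output check of the kind a governance committee might mandate could look like this:

```python
import re

# Hypothetical guardrail screen run before a generative AI answer is released.
# The patterns and flags are illustrative assumptions, not a HealthEdge process.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # U.S. Social Security numbers

def review_output(answer: str) -> dict:
    """Flags surfaced to a human reviewer before an answer reaches a member."""
    return {
        "possible_phi_leak": bool(SSN_PATTERN.search(answer)),  # crude identifier screen
        "empty_or_refusal": not answer.strip(),  # the model produced nothing usable
    }

# Any flagged answer is routed to manual review instead of being sent.
flags = review_output("Your claim for member 123-45-6789 was approved.")
assert flags["possible_phi_leak"]
```

A real committee would pair checks like these with the bias and accuracy test suites and vendor architecture audits he recommends.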

Lastly, he thinks companies like HealthEdge should build use cases for administrative and operational efficiency.


Benefits outweigh risks, if done responsibly

Beyond overlaying private data on public data sets, it’s easy to be impressed by the potential of generative AI, even in its earliest stages.

Generative AI could unlock up to $1 trillion in value for the industry, in part by putting previously unstructured data sets to work, according to a report from McKinsey. The report describes a new normal in which call center specialists can pull information from across plan types and files, speeding up the claims denial process.

At CCS Medical, Mackey said the company can generate claims based on its diabetes patients’ use of products; it seeks prior authorization from payers and submits its own claims, but usually has to deal with a tedious back-and-forth process.

A recent report from the American Medical Association found that 93% of physicians experienced delays in patient care due to prior authorizations, and 82% said the process can lead patients to abandon treatment. A standard review, without generative AI, takes up to 10 business days, according to the Centers for Medicare & Medicaid Services.

Other benefits in the healthcare space include summarizing research papers, answering patients’ concerns and speeding up the clinical documentation process.

“We’ve been using chatbots to mimic a more natural language interaction between a user of a health plan and the health plan itself, or for providers and payers to interact with these tools and services,” said Mackey. “In the past, they've been clunky or don’t always answer or understand the question that was being asked. This idea of training models on content, that's been around for a while, and we've been at it for many years, but what's really a massive step forward in my view is just how easy these tools are to interact with and how accurate they are. They often get to 80-90% of what is really the intent of what’s being asked.”

This technological advancement allows physicians to apply generative AI to unstructured data, not just data reachable through structured queries, so they can work with more documents, pages, faxes and freeform notes.
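As a hedged illustration of what working with unstructured inputs can mean in practice, a sketch like the following (the model name, prompt and example note are assumptions, not CCS Medical's or any vendor's implementation) turns a freeform note into queryable fields:

```python
# Hypothetical sketch: extracting structured fields from a freeform note so a
# downstream system can query it. Model, prompt and note are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_fields(note: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Extract JSON with keys: diagnosis, prior_treatments, "
                        "requested_service. Use null for anything absent."},
            {"role": "user", "content": note},
        ],
    )
    return json.loads(response.choices[0].message.content)

# A faxed note becomes a record a prior-authorization workflow can act on.
fields = extract_fields("Pt with T2DM, failed metformin, requesting CGM supplies.")
```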

Mackey also said he foresees a time when call center interactions are further automated, describing it as a “massive opportunity” for payers because of tools that have improved over a very short time span.

“If you get to a point where you're able to then have most of your interactions in a very easy, ask-and-answer kind of a way, you really get at the need to not have as much of a call center operation,” he said. “I’m not suggesting we’re going to get there in the next six months. But how much can we operate in the next one, two years? We’re not replacing doctors, and we’re not replacing call center folks overnight.”

Customer service roles, in healthcare and beyond, account for about 3 million jobs in the U.S., according to May 2022 data from the U.S. Bureau of Labor Statistics.

Call center leaders are preparing for the shift: 46% are planning to deploy or are already deploying LLM-based products within the next 12 months, according to a report from health AI company Hyro, an organization that stands to benefit from an industry shift to AI. Based on interviews with call center managers and executives, the report found that healthcare call center leaders struggle to prove ROI, that 39% of leaders cite burnout and turnover as main drivers of inefficiency and that companies both big and small will adopt LLMs.


Regulation is still catching up

Generative AI is not only a weapon of choice for malicious actors; it is also easy for those actors to operate in the shadows, since there are no regulations keeping them in check.

“It’s now easier for threat actors to expand their hacker-as-a-service business model in the underground economy, because they’re using LLMs to identify exposures and vulnerabilities, including leaked credentials and personal information, in the above economy,” Sto. Tomas said.

Even as health payers do all they can to mitigate risks, it seems inevitable that regulations will be passed to curtail the problems they face, especially after data breaches at OpenAI, Facebook and Samsung impacted millions of people.

Regulations could include watermarks or designations warning users that content has been synthesized or summarized by generative AI, Mackey said.

“These events will trigger regulators around the world to start drafting regulations,” said Sto. Tomas. “It is definitely coming. We need to anticipate the future regulation that will actually be imposed on us.”