From 'transformative' to 'tremendous fear': Takes on ChatGPT in healthcare at ViVE 2023

NASHVILLE, Tennessee—From “transformative” to inducing heartburn in security experts, there was no shortage of opinions about the use of ChatGPT in healthcare at ViVE 2023 last month.

Microsoft-backed OpenAI released ChatGPT, an AI language model, late last year. It lets users enter written prompts and receive humanlike text generated by the AI in response.

In February, Microsoft unveiled a new AI-improved Bing, which runs on GPT-4, the newest version of OpenAI's language model systems.

The excitement over the use of the technology in healthcare ramped up when Google's medically focused generative artificial intelligence model achieved 85% accuracy on a U.S. Medical Licensing Examination practice test, the highest score ever recorded by an AI model, according to preliminary results shared by the Google Health AI team.

Here's a collection of takes from healthcare execs and policy leaders at the conference:

Micky Tripathi, Ph.D., national coordinator for health information technology at the Department of Health and Human Services (speaking on stage)

"It's kind of remarkable how ChatGPT was made public, what, a month and a half ago, and I feel like last night just at dinner, I heard 10 ideas on the use of ChatGPT that people are either doing or contemplating doing in healthcare that I hadn't even thought of.

"I think all of us hopefully feel tremendous excitement and you ought to feel tremendous fear also. As we know, inappropriately used, there are equity issues, there are safety issues, there are quality issues, there's all sorts of issues without appropriate transparency and appropriate governance over how these are used at a local level. I'm not talking about big picture governance but more at the local level so that the users have a better understanding of what this algorithm is doing and how do we have a governance process in any one of these settings for what algorithms we use and make available."

Justin Norden, partner at GSR Ventures. He's also an adjunct professor at Stanford Medicine in the department of biomedical informatics research where he teaches courses on digital health and AI in medicine. 

"I think it's probably the most transformative technological shift in decades. We've just leapfrogged previous technologies and companies trying to build AI solutions to automate something, and in some cases, we're watching that kind of technology be solved overnight.

"If you just froze development now, so many problems that were previously unsolvable in healthcare could be tackled, think about prior auth issues, think about clinical registry, think about clinical notes, which is kind of where the most activity is right now.

"I think all these previously unsolved and impossible problems or pipe dreams of where we wanted to go are now possible even with just a few lines of code. It really is the most transformational piece of technology coming out and healthcare will be the last to adopt it. If you look at where technology has made the most impact in the past, it's still adding clinical burden time and it still made providers' lives worse, most people say it still leads to increased costs. I think we're at a point where those things are going to reverse."

Craig Richardville, chief digital and information officer at Intermountain Health (when asked what he thinks the buzzword of ViVE 2023 will be) 

“I gotta believe it’s got to be ChatGPT. There’s been some things around [machine learning], artificial intelligence—truly, ChatGPT has bumped it up to the next level. We’re starting to see products already starting to take the components of that and put it into their products. I can see where that will become, at some point, kind of a foundational element that you have within your application or within your product or service that you’re providing. 

“Still, it’s going to come down to what are you going to do with the outcome that comes from it. I think the answer to that ‘so what?’ question really becomes the opportunity to take advantage and drive value for our health system. For us, our value is to be moderate to low cost, great clinical outcomes and the experience not only for you as a patient but for all the caregivers that provide care. That becomes a big advantage.” 

Marti Arvin, chief compliance and privacy officer of Erlanger Health System (When asked whether appetite for ChatGPT is giving compliance professionals heartburn)

“Just heartburn? I think I’m probably gonna get an ulcer over here … 

“From what I understand about it—and I won’t profess to be any sort of expert on it—I’m concerned my clinicians are going to say ‘won’t this be great to create my notes? I’ll tell it to create a note about a patient that blah, blah, blah, and then pop the note up and this will be so much easier for me to document,’ with no recognition whatsoever about the security risk, where that data’s being stored, where it’s coming from, what they’re doing with it.” 

Sandeep Dadlani, chief digital and technology officer at UnitedHealth Group (on United AI Studio, launched a month ago, in partnership with Microsoft)

“I think, first of all, it’s a remarkable development in the industry, not because of the quality of the AI, per se, but because GPT was democratized.

“What we did, specifically, is we’ve really tried to put guardrails [in place].  

“We partnered with Microsoft to get a secure OpenAI version inside of already existing arrangements. And then we got our AI scientists to really filter some of the early use cases to see which ones are desirable, feasible and viable. 

“Our early experiments have been mostly in administrative use cases versus clinical use cases. Maybe the world will get there. But right now, we feel comfortable that there’s plenty to attack in administrative use cases. And the early results of pilots seem exciting. We haven’t scaled anything yet. We’ve been very cautious with it to make sure it works and works well with the end users. 

“We’re just taking away friction or taking away bureaucracy in the system, blockages in the system and connecting the system better, so that we—UHG employees—are not doing menial and repetitive work. They are meant for taking care of patients and members, that’s what I’m hoping everyone focuses on and spends time on versus administrative. 

“Even more importantly, the other thing we realized is that the next new thing is six months away.” 

B.J. Moore, chief information officer and executive vice president of real estate strategy and operations at Providence 

“If I was here six months ago, I don’t know if I would be poking around at generative AI, but now it’s kind of the hot topic. Six months ago, I would have thought it was just PowerPoints and white papers and BS, but now it’s coming to fruition. 

“What I love is that it’s definitively inaccurate, right? I asked it about myself and it gave me an amazing bio on myself, it was incredible. Factually, half of it was incorrect, but it was flattering and said I went to Harvard. If it’s going to make stuff up it may as well be flattering.  

“But you can see the potential, right? You know, the clinician being able to make some bullets and being able to have it expanded out to some rich description, or the inverse of being able to take a huge document and be able to boil it down to some key nuggets. … I think with generative AI it’ll be the first time we actually get more out of the system than we put in. The models haven’t been trained, the models that you and I play with as consumers. As we begin to train those models on healthcare, Providence-specific data, I think it becomes much more valuable.” 

[In response to the timeline until it reaches that point]  

“I don’t think it’s that long, I think it’s 6-12 months. We’re already partnering with Microsoft to look at training against our models. … We signed a strategic partnership with Microsoft in July of 2019, and we’ve done a lot of collaboration. ChatGPT is interesting, so as part of that we’ll double down and experiment with that.”  

Michael Hasselberg, chief digital health officer at University of Rochester Medical Center

"I've never been more excited about technology advancements in healthcare than what I've seen over the last several months. I am convinced that it's not going to replace nurses and physicians.

"I am convinced that it's going to keep physicians, providers, nurses from leaving the profession. It's the first time I can see a path forward. All these problems that all of these startups have been working on for years trying to solve and as health systems we are trying to solve, we now have a technology solution that is good enough to fix all of those problems. 

"It could be a scary time for all these startups here in this space because literally overnight their technology stacks have become obsolete. As a health system, I'm thinking, I've got a technology right in front of me and I don't even have to be a computer scientist. I can actually build and spit out, you know, solutions and applications myself with ChatGPT 4. That is super, super exciting.

"My clinicians are really excited in the health system about it. I've actually had clinicians in tears where they literally break down in tears when they see the possibilities of ChatGPT 4. That being said, my privacy officer here is kind of scratching her head. We have to have a lot of meetings around what does this mean from a security standpoint."

Katya Andresen, chief digital and analytics officer at Cigna 

“I think it’s important to understand when we talk about AI, we’re talking about algorithms. We’re talking about rules, rules that you create and learning systems that you create. So the most important thing is to understand what’s going into them, what’s teaching your model. 

“I think that when people get afraid of AI or when they talk about AI, they’re thinking about going into ChatGPT 4 and asking medical questions. 

“I think that a lot of people are starting to equate AI with conversational interfaces that provide medical advice, and I think that is something that is very early days in ChatGPT 4. I think it’s important that we look at the entire continuum of everything going on in AI. 

“What I think is really powerful is being able to leverage AI in a transparent, thoughtful way to help us make better decisions or provide better guidance, with the appropriate governance. Just like you would with clinical pathways. So doctors spend a lot of time working together to develop clinical standards or pathways for different conditions, and you wouldn’t expect anyone to just invent a pathway without a lot of scrutiny. And we have to have the same care with AI. 

“I think it’s very exciting to think down the road of the possibilities of when we have greater accuracy, and the right use cases. It all comes down to use cases. There are use cases where it’s going to be great and use cases where it’s going to be dangerous, and we shouldn’t have this monolithic sense of what AI is or what the use cases are. So I hope we take an open mind of thoughtfully experimenting with where AI can make care faster, better, more standardized and more personalized. 

“I do think there’s a lot of hype and a lot of oversimplification of what we’re talking about.” 

John Brownstein, Ph.D., chief innovation officer of Boston Children’s Hospital 

“In pediatrics, it’s definitely similar to the adult world—obviously some different challenges in terms of consent and acceptance for the use of technology, but the applications are incredibly varied. And we’ve already been on this AI journey before OpenAI happened, which we’re using, and in fact we’re hiring for a prompt engineer and really doubling down in generative AI. … Actually, ChatGPT wrote our job description for the ChatGPT job that we put out. That got through HR, somehow.  

“There’s plenty of use cases but of course we have to be incredibly careful. There’s major research use cases. We have to be very careful about the information that it’s surfacing. It often hallucinates facts, it creates references that we cannot find. So we have to be very thoughtful in its application. As an organization we’re making a commitment to doing this but doing so thoughtfully.”  

(He also notes they’ve been exploring generative AI through partnerships/programs with Microsoft/Nuance ambient listening, Ava/Amazon with Alexas in patient rooms for education and patient requests.) 

Jesse Fasolo, head of technology infrastructure and cybersecurity at St. Joseph’s Health 

“There’s definitely a privacy concern. … It’s in an infantile state where people want to try it, and I think [organizations should have] boundaries and borders around it and communication because we’re seeing people wanting to try it, access it. But you obviously have to advise your organization to [have] proper use cases for it and controls and boundaries of what you should and should not do with these types of technologies.  

“And ChatGPT is just one of many, there’s many out there that do certain things. But when I looked up certain data points, I prompted it to give me the sources and those sources were invalid. So, a great tool provides great data. If it’s giving you invalid data that you can’t trust, obviously this is a very big concern for patient care.” 

Sherri Douville, CEO of Medigram 

“I must have had 20 people, executives in the last week ask what to do about it. What I told all of them is ‘whatever you think of the internet is exactly what to think about ChatGPT.’ All of those large language models use the same underlying data and the common problem set, so whatever risk management you’re putting around the use of internet is exactly what you should be doing with ChatGPT.  

“On the functionality side, it does something very specific: You retrieve data. It’s not [machine learning]. People get very confused and just call everything AI and it all does the same thing. It doesn’t. It can do retrieval, and you would also need to use ML to help with the inference.”

Ashis Barad, chief digital and information officer at Allegheny Health Network 

“There are two things that fascinate me there. As a physician, it’s impossible to stay up to date with the literature, right? … The literature is really coming out at a pace that it’s near-impossible to stay up to date with. So I think of ChatGPT from a perspective of taking large amounts of data and being able to put that in ways that are consumable for the clinician. So I hope someone is—I’m sure someone is—working on that, to think about how to [present] evidence-based medicine that’s up to date. We know we’re years behind in the way we practice, so I think ChatGPT actually is a tool there. 

“The second is ambient listening, I think there’s some neat stuff happening there. I would love to remove keyboards from inpatient rooms.”