Addressing a room of healthcare professionals at the IHI Forum on Wednesday, Peter Lee, corporate vice president of research and incubations at Microsoft, opened by saying the audience was possibly the most important one he would face this year.
He took the stage to discuss the good, the bad and the future of AI, “but before doing that, the most important thing is to actually come to grips with what the heck is this AI stuff anyway,” Lee joked to listeners’ giggles.
He continued, his tone turning serious: “That is actually something that is not just a funny question, but it can be life or death.”
Lee oversees several incubation teams at Microsoft for new research-powered lines of business, the largest of which is the company’s growing healthcare and life sciences effort. He is a member of the National Academy of Medicine and co-authored a book, published earlier this year, “The AI Revolution in Medicine: GPT-4 and Beyond.”
It is wrong to think of large language models like GPT-4 as traditional computers, Lee said. They do not do perfect calculations and are incapable of perfect memory recall. GPT-4 is a reasoning engine, and it can make mistakes just like a human can.
“My recommendation today is for the human doctor or nurse to do your own work,” Lee said, and then to use generative AI as a “second set of eyes.”
One area where generative AI has been shown to be powerful, per Lee, is in communicating with patients. He cited a recent study in which patients rated responses from AI as more empathetic than those from physicians.
“A machine has the tireless ability to just add those extra personal touches,” Lee said.
Clinicians can learn from machines to better step into someone else’s shoes and practice empathetic techniques, something Lee termed “reverse prompting.” Just as a human can prompt the machine for a response, the machine can prompt one from a human.
Lee mentioned other use cases for AI that show promise for providers and patients to date, from session transcriptions and note summaries to clinical trial matching to drafting prior authorization requests, an area where Microsoft is currently experimenting. Generative AI can also help patients understand their lab results, Lee said, or interpret an explanation of benefits.
“In my interactions with C-suite executives at major health insurance companies, I’ve learned that they don’t have an ability to read these things either,” Lee joked.
Though the potential of AI across healthcare is huge, Lee said it carries serious risks, such as so-called hallucinations. That’s why the early real-world deployments of large language models have mostly been in areas pertaining to administrative burdens, rather than to clinical decision-making.
“It’s still very early days. We’re all learning,” Lee concluded. “There is no stopping this now.”