Google, Microsoft execs share how racial bias can hinder expansion of health AI

ARLINGTON, Virginia—As new generative AI models like ChatGPT gain popularity, some experts are saying that to ensure such tools work in healthcare, implicit racial biases baked into health data must be accounted for. 

Officials with Google and Microsoft discussed the use of AI in healthcare during the Healthcare Datapalooza event held Thursday in Arlington, Virginia. There is a lot of excitement around the potential of AI models like ChatGPT—a chatbot trained on massive data sets that can generate text and code—for healthcare use cases.

The goal is for AI to one day “support clinical decision-making [and] enhance patient literacy with educational tools that reduce jargon,” said Jacqueline Shreibati, M.D., senior clinical lead at Google. 

However, there are gaps around the use of these models in healthcare. Chief among them is that clinical evidence is constantly evolving, so a model's training data can quickly fall out of date.

Another key problem is that the data themselves may contain racial bias that needs to be mitigated.

“A lot of [health] data has structural racism baked into the code,” Shreibati said.

The challenge is how to address those biases so that any results from the use of AI are equitable, a major priority for the healthcare industry.

Microsoft, which is an investor in the company that developed ChatGPT, is aware of the anxiety in the healthcare space over AI, said Michael Uohara, M.D., the chief medical officer leading the software giant’s healthcare efforts in the federal sector.

“Our approach at Microsoft is to not just retroactively look at our products and say ‘hey, we need to be more responsible,’ but actually start end to end in the product development life cycle,” he said. 

Another key issue is transparency across the design process, including laying out clearly what an AI tool is and is not capable of accomplishing.

There continue to be a lot of questions about how ChatGPT can be applied to healthcare. A recent editorial in Stat News found that while the chatbot’s data set includes the Current Procedural Terminology (CPT) code set, it cited problematic sources for certain diagnoses.

Digital platform Doximity rolled out its own beta version of a ChatGPT tool for doctors aimed at helping with administrative tasks such as drafting preauthorization requests to insurers. Doximity hopes to create a set of medical prompts that can generate draft letters to insurers, appeals of coverage denials and post-procedure instructions for patients.