With the release of ChatGPT, what had long been science fiction became a reality, leaving healthcare wondering where medicine meets machine.
This week, Stanford Medicine and Stanford Institute for Human-Centered Artificial Intelligence (HAI) launched Responsible AI for Safe and Equitable Health (RAISE-Health) with the goal of addressing critical ethical and safety issues regarding the technology.
RAISE-Health will be co-led by Stanford School of Medicine Dean Lloyd Minor, M.D., and Stanford HAI co-director and computer science professor Fei-Fei Li, Ph.D. The two will be charged with establishing a go-to platform for responsible AI in health and medicine, defining a structured framework for ethical standards and safeguards, and regularly convening expert discussions on the subject.
“AI is evolving at an incredible pace; so, too, must our capacity to manage, navigate and direct its path,” Li said in a press release. “Through this initiative, we are seeking to engage our students, our faculty and the broader community to help shape the future of AI, ensuring it reflects the interests of all stakeholders—patients, families and society at large.”
RAISE-Health lists its goals as enhancing clinical care outcomes through responsible AI, accelerating research and educating patients, care providers and researchers.
A Pew Research survey of 11,000 U.S. adults published in February of this year found that 60% of Americans would feel uncomfortable if their healthcare provider relied on AI for their medical care.
However, the country was nearly evenly split when asked whether AI would lead to better, worse or negligible change in health outcomes. Still, respondents showed a measure of trust in the technology, with 38% saying it would lead to improved outcomes.
More Americans, 40%, think the use of AI in health and medicine would reduce rather than increase the number of mistakes made by healthcare providers. On the question of bias, just over half of respondents believe problems of bias and unfair treatment would improve if AI were used more often in diagnosis and treatment; 15% believed the opposite.
As for the personal, emotional aspect of care, 57% said the use of AI would make a patient-provider relationship worse. Only 13% said the tech would improve that relationship.
“Though Americans can identify a mix of pros and cons regarding the use of AI in health and medicine, caution remains a dominant theme in public views,” the study read. “When it comes to the pace of technological adoption, three-quarters of Americans say their greater concern is that healthcare providers will move too fast implementing AI in health and medicine before fully understanding the risks for patients.”
Li’s research has focused on ambient intelligence, the practice of using AI to monitor and respond to human activity in homes and hospitals. Her recent publications have included robotics research in tool manipulation, deep neural networks and the use of computer vision to increase safety standards in hospitals.
Researchers from the Stanford Center for Research on Foundation Models, part of Stanford HAI under Li’s direction, recently responded to the National Telecommunications and Information Administration (NTIA) on AI accountability policy. HAI’s response stated that foundation models represent a broad shift in AI, one that carries new potential for misuse and abuse.
“These assets determine much of the digital supply chain, thereby intermediating dependencies between organizations (e.g., Khan Academy depends on OpenAI because GPT-4 powers Khanmigo) and, in turn, the sectors affected by foundation models,” HAI wrote. “The ecosystem view makes clear where existing sector-level regulatory authority can be used to hold foundation models, the companies that provide them and their downstream products and services (e.g., in medicine or law) to account.”
Minor has served as dean of the Stanford School of Medicine since December 2012. He is also a professor of otolaryngology (head and neck surgery), of bioengineering and of neurobiology. His book, "Discovering Precision Health," describes a shift to patient-centric, value-based care aided by technological advances.
In 2021, Minor shared his multidecade plan for the life sciences at Stanford, including leveraging information sciences, technology, biology and biomedicine to establish a “biomedical innovation hub” that translates technological advances into improvements in human and planetary well-being.
“AI has the potential to impact every aspect of health and medicine,” Minor said in a press release. “We have to act with urgency to ensure that this technology advances in line with the interests of everyone, from the research bench to the patient bedside and beyond.”