President Joe Biden is set to sign a sweeping executive order establishing new safety, security and equity standards for artificial intelligence, including its use within the healthcare industry.
Headlining “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems” is a requirement that developers of AI systems posing “a serious risk” to national public health and safety share their safety test results with the federal government, according to an announcement from the White House. The requirement also extends to systems with national security and national economic security implications.
Within healthcare specifically, the president is instructing the Department of Health and Human Services to “establish a safety program to receive reports of—and act to remedy—harms or unsafe healthcare practices involving AI,” according to the White House’s announcement. The administration will also be expanding grants for AI research in healthcare.
Per a draft of the order obtained by Politico, the department will have a year to develop its broad strategic plan on “responsible” use of AI.
HHS will also be tasked with exploring the intersection of nondiscrimination laws and AI and with building an AI safety program for detecting harm-related incidents, and it will be given 180 days to determine whether current AI is accurate enough for use in healthcare, Politico reported.
Other components of the executive order include: a government-developed standard for AI safety testing to be applied prior to public release, safeguards for individuals whose data is used in large-scale model training, standards protecting against AI synthesis of “dangerous biological materials,” best practices for detecting fake media created by AI (so-called “deepfakes”) and principles for mitigating the harms of automation on the healthcare workforce and the broader labor market.
Biden is scheduled to sign the order Monday afternoon. His administration is also calling on legislators to follow suit.
“The actions that President Biden directed today are vital steps forward in the U.S.’s approach on safe, secure and trustworthy AI,” the White House wrote in its announcement. “More action will be required, and the administration will continue to work with Congress to pursue bipartisan legislation to help America lead the way in responsible innovation.”
Monday’s executive order is the latest in a string of early and exploratory AI efforts launched by the administration. Those have so far consisted of risk management frameworks, strategic plans and voluntary commitments from 15 AI developers like Google, Amazon, Microsoft, OpenAI, IBM and Nvidia.
AI, and particularly the latest wave of generative AI technologies, has picked up plenty of attention within healthcare due to its promise of reduced administrative burdens and clinical decision support. Despite a slew of big tech-healthcare partnerships working to iron out the technology’s stumbling points, most patients say they don’t trust AI to be used in a healthcare setting.