President Joe Biden's sweeping executive order on artificial intelligence marks a significant move to address the accountability of how AI technology is developed and deployed.
Industry leaders see the potential to harness AI to accelerate diagnosis, enable more precise treatments, aid drug development and ease the documentation burden for doctors. The EO comes amid growing excitement and experimentation with large language models and generative AI in healthcare. But there are risks with AI tools.
Currently, the AI industry is the "wild, wild West," said Patrick Bangert, senior vice president at tech and AI consulting firm Searce. "There are almost no rules and many companies invent their own rules and they are essentially unchecked. I think regulation is absolutely essential."
The order provides a pathway for the industry and the federal government to begin to address issues around privacy, security, safety and equity in the use of AI while continuing to advance innovation.
"It's an important step forward and the devil is in the details as we think about how agencies will try to implement the directives in the EO, that's going to be the key thing," said Rachel Stauffer, senior director at McDermott Consulting.
The implications of the executive order for the healthcare industry go beyond just the healthcare-specific directives, Stauffer noted. "There are also other directives that are likely to impact the use of AI in healthcare, for example, around privacy and security and around data sharing."
The executive order also requires agencies to take action within six months to a year.
"Stakeholders are going to have to pay attention to the subsequent sheer volume of guidance and strategy coming out of the task force and all of those things that are required because it's going to happen pretty quickly," she added.
Brigham Hyde, Ph.D., CEO and co-founder of Atropos Health, a real-world data platform, said he viewed the AI executive order as a "balanced, thoughtful approach … However, the term definition section leaves a lot for interpretation. We look forward to public comment on these important details."
"Our hope at Atropos Health is to continue to future-proof these regulations, as a method for fueling innovation. At Atropos our core beliefs are focused on transparency, auditability and methodological rigor. The technology to enable that is already in existence and responsible use of AI will hinge on these critical elements," Hyde said.
But many industry experts are skeptical about how AI can be effectively monitored and reined in given that regulations are slow to keep up with the pace of innovation. Others voiced concerns that the EO only establishes voluntary measures.
Biden's EO does not give agencies new powers to regulate AI outside the national security area. The order uses the Defense Production Act to require AI companies to report to the government the results of safety tests and other information when they train AI systems that might have national security or critical infrastructure risks.
"It has no teeth," said Bangert. "The contents of the Executive Order are very similar to the European Union AI Act with one notable exception, there is no penalty. If you are not compliant with the Executive Order, what happens to you? Well, according to the Executive Order, not much."
The executive order, released Oct. 30, aims to create new safety, security and equity standards for AI, including its use within the healthcare industry. The EO establishes principles, tasks federal agencies with developing methods of testing AI usage in their work and codifies governmental oversight of AI development by private companies.
Within healthcare specifically, the president is instructing the secretary of the Department of Health and Human Services (HHS) to "establish an AI Task Force that must develop a strategic plan that includes policies and frameworks, including potential regulatory action, on responsible deployment and use of AI and AI-enabled technologies in the health sector within 365 days," according to the White House draft of the order.
This guidance must cover specific areas, including the incorporation of safety, privacy and security standards into the software development life cycle for personally identifiable information. This strategic plan will pertain to research and discovery, drug and device safety, healthcare delivery and financing and public health.
After a detailed 180-day study mandated by the EO, HHS is required to issue a strategy on whether AI technologies in the health and human services sector "maintain appropriate levels of quality." HHS then must take appropriate actions to ensure that healthcare providers who receive federal funding comply with nondiscrimination requirements when utilizing AI technology.
The department also is directed to develop a system of premarket assessment and postmarket oversight of AI-enabled healthcare technology.
"I think there needs to be a regulatory framework for advanced technologies and the industry needs to understand the guardrails and how they need to comply with not only the regulations, but the intentions of those regulations," said Peter Bonis, M.D., chief medical officer at Wolters Kluwer, an information services company that developed the clinical decision support tool UpToDate. "Whether or not it goes far enough, or goes too far, I think depends a bit on perspective and how the industry evolves. I think there is potentially a trade-off between regulations and innovation. But we have yet to see whether or not that actually will impede meaningful innovation."
Luciana Borio, M.D., a venture partner at Arch Venture Partners who served as acting chief scientist of the FDA from 2015 to 2017, believes the evolving AI industry will pose enormous challenges for the FDA.
"The fact is that this is going to move at such a rapid speed, it's going to be virtually impossible for them to do so under the current way that they regulate. AI is going to learn and adapt and self-correct much faster than anybody can make submissions to the FDA for review," she said during an AI panel at the Milken Institute's Future of Health Summit this week.
"We have to think about new ideas. The other risks that we see is that we're going to see this bifurcation where we're going to see a lot of progress perhaps outside the U.S. and in the consumer-facing world of healthcare, but it's just under the radar for what FDA would regulate. We're going to miss out on the promise of bringing wellness and health under one continuum," Borio said.
There remain a lot of open questions from a regulatory standpoint, agreed Brian Anderson, M.D., chief digital health physician at MITRE, who also spoke on the Milken Institute AI panel.
"Does the FDA even have the regulatory process to enable them to proactively monitor models absent an adverse event reporting incident? That's an open question that lawyers are arguing about right now," he said.
There's also the risk that any future regulation or standards will delay or stifle innovation.
"Most people are not receiving optimal care. AI can improve that, but we have to be careful not to hold it to a standard that is impossible because the absence of deployment of this technology, if you hold it to the single standard, means status quo, which is really not acceptable," Borio said. "These are discussions that we haven't really grappled with. What is the standard that we hold it to, that it's as good as the current standard of care? Isn't that inferior? Does it have to be perfect? What's perfection? Every time we see a safety issue, is it going to bring it to a halt? We're going to have to collectively decide on what's acceptable and what's not."
Many industry experts are optimistic that the executive order lays the groundwork for private-public partnerships, which will be key to moving forward on AI guardrails.
"I'm comforted in seeing that there's a lot of language about working with innovators, in this space because regulation done absent being informed by innovators could create, I think, a stifling environment," Anderson said.