Many open questions remain on AI risks and liability that will likely play out in court, experts say

President Joe Biden's executive order on artificial intelligence gives marching orders to federal agencies to take concrete actions in the coming year.

The order, released Oct. 30, aims to create new safety, security and equity standards for AI, including its use within the healthcare industry. The EO establishes principles, tasks federal agencies with developing methods of testing AI usage in their work and codifies governmental oversight of AI development by private companies.

But the U.S. Department of Health and Human Services (HHS) and other government agencies face major challenges in setting rules and policies around AI. For one, there are already other ongoing efforts "in flight" to regulate AI or establish principles around its use, Peter Bonis, M.D., chief medical officer at Wolters Kluwer, an information services company that developed the clinical decision support tool UpToDate, noted. A year ago, the Biden administration released the AI Bill of Rights, an unenforced call to action laying out voluntary guidelines that companies developing or deploying AI can follow to safeguard people from misuse or abuse.

In April, the Office of the National Coordinator for Health IT, within HHS, released a proposed rule that would require electronic health record systems using predictive tools like AI and algorithms to provide users with information about how that technology works, including a description of the data it uses.

The U.S. Food and Drug Administration (FDA) already regulates AI- and machine-learning-enabled medical devices as well as software as a medical device.

In addition, some states are moving to pass legislation to rein in the use of AI.

"All of these agencies need to work together to create a comprehensive and understandable set of regulations that don't just confuse the industry," said Bonis.

The executive order also doesn't address one of the key issues related to the use of AI in healthcare: liability.

"That is the huge, overhanging elephant in the room here," Patrick Bangert, senior vice president at tech and AI consulting firm Searce, said in an interview. "If I misdiagnose you, I prescribe you the wrong medicine, I put your CT scan under somebody else's name, who's liable for any of these things? We have no idea. The executive order doesn't spell anything out like that. This is going to lead to untold legal action."

He added, "The real beneficiaries here are the lawyers. The standards, no matter what they will be, will be vague. I expect the courts will become busy."

Donald Rucker, M.D., chief strategy officer for 1upHealth, also believes that the lack of clarity will result in many issues around AI being "cleaned up in court."

"People will use algorithms, they won't work. There will be accountability. This is a technical problem and will be solved. You can put explanation layers, explanation tools, explanation weighting functions into all of these algorithms," said Rucker, who served as the former national coordinator for health IT within HHS, during the recent Fierce Health Payer Summit.

During a discussion about AI at the Milken Institute's Future of Health Summit this week, panelists also agreed there is currently a lack of clarity on who will be held liable if the use of AI leads to an adverse event or negatively impacts patient care.

"I don't know that it's clear who's liable. I think it's one of those instances where it may end up having to play out in courts," said Luciana Borio, venture partner at Arch Venture Partners.

Brian Anderson, M.D., chief digital health physician at MITRE, noted that doctors use clinically validated risk scores and other decision support tools as a part of patient care, but there is transparency about the clinical evidence supporting those tools.

"As a physician, I'm comfortable using those risk scores because I see the peer-reviewed journal article that describes where that data came from and how that calculator was built. In AI, we don't have that kind of transparency right now. And so if physicians are going to be held accountable and liable for the use of these models, we need a level of transparency in how these models are trained, what their accuracy or their performance scores are," he said during the Milken Institute AI panel.

While Biden's EO seeks to lay the groundwork to ensure safety, security and privacy in the use of AI, there remain many open questions, experts say. 

The problem, many stakeholders say, is that the healthcare industry and the federal government are still racing to catch up and understand the underpinnings of AI and large language models.

"There's essentially zero understanding of large language models and generative AI in any of the federal agencies," Rucker quipped during the Fierce Health Payer Summit.

Adrian Aoun, founder and CEO of Forward Health, noted that regulatory policies are difficult to develop when it's challenging to know how AI is going to play out.

"We need to better understand the problems before we try and regulate them. We're just so trigger happy in the regulatory world. I'm actually for regulation in AI, but I just think that we're shooting from the hip right now," he said.

Is the EO just adding more complexity?

Many technology executives are concerned that the reporting requirements will be a big lift for some developers, will increase costs and may even conflict with current AI regulations under agencies like the FDA.

As of January 2023, there are more than 520 market-cleared AI medical algorithms available in the U.S., according to the FDA. The vast majority of these are related to medical imaging.

"The FDA has put in very strict guidelines on what needs to be done in order to validate a technology that can impact human safety in the medical arena and AI has to follow the same regulations and guidelines. The executive order is unnecessary for any technologies that go through the FDA, as we already have a regulatory framework," said Leo Grady, the former CEO of Paige.AI, a company that develops AI-based software in pathology. Paige received the first FDA approval for an AI product in digital pathology.

He added, "Within healthcare, certainly these standards and these values of equity and fairness, quality and robustness are important. And within healthcare, we've actually faced these demons for a long time."

Bonis also contends that Wolters Kluwer, as a healthcare technology solutions company, already adheres to strong standards when developing tech for healthcare.

"Healthcare is a very regulated environment, and we've been operating in that environment for a long time. We operate in a very high-stakes domain around clinical decision-making, so when we develop new applications we are extremely rigorous and have developed our own internal policies around using advanced technologies, including generative AI," he said, adding, "We have the horsepower to understand the regulations on a global scale, and we have compliance mechanisms in place to incorporate evolving healthcare regulations in our efforts to bring new assets to market."

But Srini Iyer, chief technology officer of the health group at Leidos, is concerned that the EO doesn't go far enough, as it could be too easy for companies to circumvent the reporting requirements. Companies could claim their AI models don't fall under the government's definition for the reporting requirements in order to avoid enforcement, he noted.

"I think there's going to be shadow AI. People are going to be doing stuff and not reporting it because they're going to look for that loophole. We can govern to a point where people just start going around us. How can we make it transparent and open so that people are reporting and feel that it's OK to report, that they're not going to get penalized for reporting?" he said during the Milken Institute AI panel.

Grady also points out that there is a "world of medical technologies that fall outside the FDA," solutions like "electronic medical records, resupplying meds or inventory at a hospital" or technology that influences hiring practices within a hospital.

"Those are not subject to the FDA and there is no regulatory framework there that ensures a reduction in bias. But I think that's not an AI problem. I think that's a general problem within the medical and healthcare world and maybe within society, and I think singling out AI, especially in the absence of a concrete definition of AI, is inadequate because the issues we face go far beyond AI," he said.

And some industry stakeholders are concerned that any policies or guidelines that come out of the HHS AI task force will ultimately benefit larger tech companies like Amazon Web Services, Google, Microsoft and Oracle.

"At the moment, the president has given broad and rough strokes that now need to be fleshed out with details. And who's going to supply those details? It's the market, which means the standards will be defined by the very companies that are supposed to be compliant with the standards," said Bangert.

Michael Abrams, co-founder and managing partner of Numerof & Associates, contends that the directives in the EO are "reinventing the wheel" rather than building on what's already been established through FDA regulation. And the directives will ultimately disadvantage smaller companies, he claims.

"Small companies will now have the threat of enforcement on top of the enormous cost of compliance. I can only imagine the red tape that will be involved if you are a company contemplating doing work in this space. It will drive small companies out of the space and it will advantage large companies," he said.

Increased compliance requirements may initially raise costs, noted Dave Latshaw II, Ph.D., CEO and co-founder of BioPhy, an AI technology company that accelerates the development of drugs undergoing clinical trials.

But regulations can also act as a checks-and-balances system, "ensuring that big tech does not monopolize the AI life sciences space while still allowing smaller companies room to grow with more specialized systems," he said. "As regulations become more defined, we can expect a surge in opportunities for AI specialists in life sciences, driven by a wave of investments in advanced AI technology."