FDA advisory committee to roll up sleeves on generative AI this week

The FDA will hold its first Digital Health Advisory Committee (DHAC) meeting to discuss how the agency should review medical devices that rely on generative AI, like chatbots.

The meeting will be held in person Wednesday and Thursday at the Holiday Inn in Gaithersburg, Maryland, and will also be available via webcast. The meeting is open to the public.

The independent experts on the committee will provide advice and recommendations to FDA about digital health technologies and identify risks with the products and the FDA’s regulatory structure. The first meeting is focused on the use of generative AI in medical devices.

“The novel capabilities of GenAI may offer unique benefits to patients and public health, but the use and adoption of GenAI also come with specific risks and complexities that challenge FDA’s approach to the regulation of devices,” an executive summary document for the meeting says.

The FDA defines generative-AI-enabled devices as those for which generative AI is integral to the device's output or function. Such a device might, for example, offer a clinical diagnosis or diagnose and treat mental health conditions through a chatbot.

DHAC members will discuss which data should be available to the FDA for premarket evaluation and how the agency should evaluate a device's performance and training data. They will also discuss which prospective performance metrics are most informative for generative AI.

The FDA will also ask the panel to discuss how generative AI differs from non-generative, or predictive, AI, which the agency has been evaluating for decades. The advisory committee members will consider what information about a device is valuable for generative AI compared with predictive AI. The committee will also discuss usability risks for generative AI relative to predictive AI and what information should be relayed to the user given those risks.

In addition to the considerations for premarket review of a generative AI medical device, DHAC members will tackle how the devices could be consistently monitored once they’ve been deployed in medical practice. They’ll discuss what tools are effective for post-market monitoring and how to monitor performance across regions.

The FDA also released documents about its thinking on generative AI for the digital health advisory committee and the public to consider during the discussion. The executive summary includes the FDA’s current regulatory structure for digital health technologies and some of the challenges it has identified for the regulation of generative AI.

Some of the FDA's primary concerns center on the unpredictability of the foundation model a medical device may be built on, particularly when the foundation model itself is not medical-grade.

The FDA is grappling with how to evaluate the foundation model and ensure it functions properly both premarket and post-market. The agency has established in guidance that developers bear responsibility for the safe and effective use of a medical device that incorporates off-the-shelf software.

Foundation models for generative AI could raise similar concerns due to the lack of software life cycle control. It’s possible that the foundation model itself may be subject to FDA’s device regulatory oversight.

The FDA is concerned about the data inputs to a foundation model and how they could bias the medical device, a problem the developer may be unable to do much to mitigate. To address this, the FDA says it may need information about the foundation model used to build the application, and developers should consider how to manage the foundation model even when it is outside their control.

The agency also raised concerns about the generative, creative nature of generative AI, which by design is expected to create new outputs for users based on the input data.

“GenAI models can analyze input data and produce contextually appropriate outputs that may not have been explicitly seen in its training data,” the executive summary for the Nov. 20-21 committee meeting says.

FDA worries that generative AI could hallucinate and introduce uncertainty into the device’s behavior.

Concerns about generative AI lead to questions about a device's intended use. For the FDA, intended use is paramount to determining which risk framework to apply to the device, which approval pathway it goes through and what tools are used to test the device once it's deployed in the market.

“There are unique characteristics of GenAI that, as part of a product’s design without adequate risk controls, can introduce uncertainty in the product’s output and can make it difficult to determine the bounds of a product’s intended use, and therefore, whether it meets the definition of a device and is the focus of FDA’s device regulatory oversight,” the executive summary says.

Risk is a central theme of the document and is largely why the FDA is holding the meeting in the first place. The agency has identified multiple potential risks with generative AI products, including their tendency to hallucinate and generate inaccurate content. There is also the risk posed by foundation models, which the medical device developer may not fully understand or be able to access.

The agency appears to caution developers to weigh whether generative AI is appropriate for their device before using it. The FDA specifically calls out the potential for generative AI to spread misinformation and to twist known information into misinformation.

It also signals that the end user may be entitled to certain information about the product.

“This may include a variety of transparent information for the end user on the GenAI-enabled device, such as the device design, how the device was tested, the level of autonomy, and how users interact with the device (e.g., through prompt engineering), which may impact the device output,” the executive summary says. “It is also important to understand the level of autonomy for a GenAI-enabled device, and if, and how, it incorporates human-in-the-loop and affords the level of control to the end user.”

The public can submit comments on Regulations.gov to be considered by FDA after the meeting.