Welcome to the Machine: FDA Issues White Paper on AI and Medical Products

Many music aficionados would agree that the rock band Pink Floyd was ahead of its time. “Welcome to the Machine,” from the 1975 album Wish You Were Here, was a transformative song for one of this Bulletin’s authors. Nearly 50 years later, the song still stands out for its imagery of the “machine” and its distinctive opening and closing. So when the Food and Drug Administration recently issued a White Paper on artificial intelligence (“AI”), the machine imagery did not escape us.

On March 15, 2024, FDA released a White Paper, “Artificial Intelligence & Medical Products: How CBER [Center for Biologics Evaluation and Research], CDER [Center for Drug Evaluation and Research], CDRH [Center for Devices and Radiological Health], and OCP [Office of Combination Products] are Working Together” (“the Paper”), describing how the different Centers will coordinate regulatory review of AI technologies.1 FDA defines AI as:

A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action. AI includes machine learning, which is a set of techniques that can be used to train AI algorithms to improve performance of a task based on data.2

This Bulletin highlights some of the key points in the Paper.

Highlights

  • FDA noted that a company should carefully manage AI throughout a medical product’s lifecycle to help ensure ongoing model performance, risk management, and regulatory compliance of AI systems.
  • AI systems require a risk-based regulatory framework that can be applied across AI applications to address specific product uses and needs.
  • The Centers focus on four areas for the development and use of AI in medical products:
    1. promoting collaboration to protect public health;
    2. advancing the development of regulatory approaches that support innovation;
    3. promoting the development of standards, guidelines, best practices, and tools for the medical product lifecycle; and
    4. supporting research related to the evaluation and monitoring of AI performance.
  • The Centers intend to engage in collaborative partnerships with product developers, patient groups, academia, global regulators, and other interested parties to develop a regulatory framework that addresses patient-specific needs. FDA will:
    1. seek comments on critical aspects of AI use in medical products, such as transparency, cybersecurity, and bias;
    2. promote educational initiatives to support collaborative efforts among interested parties involved with AI and medical product development; and
    3. continue to work with global partners to encourage international collaboration and consistency in the use and evolution of AI across medical products.
  • FDA plans to develop policies that provide regulatory predictability and clarity for the use of AI, such as:
    1. reviewing trends to detect potential knowledge gaps, including in regulatory submissions, to help minimize product development and review delays;
    2. supporting regulatory science efforts to develop methodologies for evaluating AI algorithms, including identifying and mitigating bias;
    3. building on existing initiatives for the evaluation and regulation of AI use, including in manufacturing; and
    4. issuing further guidance on the use of AI in medical product development and medical products, such as:
      1. final guidance on marketing submission recommendations for predetermined change control plans for AI-enabled device software functions;
      2. draft guidance on lifecycle management considerations and premarket submission recommendations for AI-enabled device software functions; and
      3. draft guidance on considerations for the use of AI to support regulatory decision-making for drugs and biological products.
  • The Centers note they will build on Good Machine Learning Practice Guiding Principles by:
    1. developing considerations for evaluating the safe, responsible, and ethical use of AI in the medical product lifecycle;
    2. identifying and promoting best practices for long-term safety and real-world performance monitoring of AI-enabled products;
    3. exploring best practices for documenting and ensuring that data used to train and test AI models are appropriate; and
    4. developing a framework for quality assurance of AI-enabled tools.
  • The Centers state that they will support demonstration projects that:
    1. identify points where bias can be introduced and how it can be addressed;
    2. promote health equity (e.g., diversity and inclusion efforts); and
    3. support the ongoing monitoring of AI tools in medical product development to help ensure their performance and reliability throughout the lifecycle.

AGG Observations

  • It is a positive sign that multiple FDA Centers are working together to address the ever-evolving AI landscape.
  • FDA recognizes that AI technologies are becoming more commonplace and acknowledges their potential benefit in advancing public health.
  • The Paper allows industry to review FDA’s current thinking on AI and medical products, its areas of focus, and what its next steps might be.
  • To quote Pink Floyd, “Welcome to the machine; Where have you been; It’s all right, we know where you’ve been.” FDA has been busy trying to navigate the regulatory issues with AI and product development. We know where AI has been; we don’t know yet where FDA regulation of AI will go.


[1] https://www.fda.gov/media/177030/download.

[2] Id.