Won’t Get Fooled Again: FDA Tells a Drug Company Not to Rely on AI

Footnotes for this article are available at the end of this page.

Key Takeaways

  • FDA issued a Warning Letter citing a drug company for inappropriate use of AI in pharmaceutical manufacturing, marking what appears to be the first time the agency has specifically addressed AI reliance in an enforcement action.
  • FDA’s position is clear: AI-generated documents and outputs must be reviewed and cleared by authorized human representatives before use in Current Good Manufacturing Practices (“cGMP”) activities.
  • Companies cannot rely on AI agents to identify applicable regulatory requirements. The quality unit remains responsible for ensuring compliance under existing cGMP regulations.

FDA recently issued a Warning Letter to a drug company about the inappropriate use of artificial intelligence (“AI”) in pharmaceutical manufacturing, reminding us of The Who’s 1971 song, “Won’t Get Fooled Again.” In a world where AI is becoming so prevalent that non-usage may put a company at a commercial disadvantage, FDA’s admonition to the company seems to be a warning shot to industry to not “get fooled again” by assuming AI tools can independently ensure regulatory compliance.

FDA Warning Letter on AI Use in Drug Manufacturing

The Warning Letter focused on typical cGMP and quality concerns. It was issued to the drug company following an inspection of its drug manufacturing facility that identified significant cGMP violations, including insanitary conditions (insects, filth, leaves, and clutter observed at the facility), failure to conduct microbiological testing of finished products before release, failure to test components for identity and purity, and broad quality unit failures. FDA found the company’s response inadequate, as it failed to provide supportive documentation or adequate evidence of corrective actions.

What especially caught our eye was a statement in the Warning Letter about AI:

During the FDA inspection… you stated to FDA investigators that you utilized artificial intelligence (AI) agents … to help your firm comply with FDA regulations. Specifically, you used AI to create drug product specifications, procedures, and master production or control records to be in compliance with FDA requirements.

If you use AI as an aid in document creation, you must review the AI generated documents to ensure they were accurate and actually compliant with CGMP. Your failure to do so is a violation of 21 CFR 211.22(c) [the quality control unit shall have the responsibility for approving or rejecting all procedures or specifications impacting on the identity, strength, quality, and purity of the drug product]. Overreliance on artificial intelligence for your drug manufacturing operations was also documented during the inspection. For example, the FDA investigators found that you had not conducted process validation prior to distribution of your drug products, as required… and informed you as such. You replied that you were not aware of the legal requirements, as the AI agent you used … never told you it was required.

… If you plan to resume drug production, and use AI to help with CGMP activities…, any output or recommendations from an AI agent must be reviewed and cleared by an authorized human representative or your firm’s QU [Quality Unit]…

What FDA’s AI Enforcement Action Means for Drug Manufacturers

  • We have written bulletins[1] about FDA’s guidance documents relating to AI and medical products. FDA clearly sees the benefits of AI use in product development. However, complete reliance on AI, without human involvement and supervision, brings risk.
  • Companies should not assume that AI tools will ensure regulatory compliance. Companies should ensure their quality units are actively reviewing and approving any AI-generated specifications, procedures, and records.
  • The company’s explanation that it skipped process validation because the AI agent it used “never told [it] it was required” underscores a critical point: AI is not a substitute for regulatory knowledge and expertise. Companies must maintain personnel with sufficient knowledge of cGMP requirements, regardless of the AI tools they deploy.
  • There is no specific statutory or regulatory provision under which FDA can cite a company for AI usage. Instead, the agency will cite the underlying regulation for noncompliance (in this Warning Letter, the cGMP regulations).
  • Yes, there are benefits of AI usage in the life sciences industry, but there are also risks. While companies may feel pressure to embrace AI to remain competitive, FDA’s Warning Letter suggests industry should not “get fooled again” by treating AI as a substitute for qualified human oversight.
  • Companies that use or are considering using AI in manufacturing, quality, or compliance operations should proactively develop internal policies governing the use of AI tools. These policies should address, at a minimum, human review and approval requirements, documentation of AI-assisted activities, and training for personnel who interact with AI-generated outputs.

In sum, one Warning Letter does not make a trend. However, it does indicate that FDA can, and will, look to company AI usage if it has regulatory concerns about compliance. Companies using AI in cGMP activities should ensure robust review processes and should not treat AI outputs as a substitute for qualified human judgment and regulatory expertise.

For guidance on these issues, please contact a member of AGG’s Food & Drug team.


[1] See AGG Bulletins dated June 15, 2023 and April 8, 2024.