Paranoid Android: FDA Advisory Committee Continues to Evaluate AI Mental Health Tools

Footnotes for this article are available at the end of this page.

Key Takeaways

  • FDA oversight of AI mental health tools is accelerating. The Digital Health Advisory Committee’s (“DHAC’s”) November 2025 meeting highlights FDA’s increased attention to digital health technologies, such as mental health chatbots that use generative AI.
  • Regulatory expectations for digital health will focus on risk, transparency, and ongoing performance. DHAC discussions and public comments show that evidence standards, clear labeling, crisis escalation safeguards, and post-market monitoring will be central to how FDA evaluates AI mental health applications.
  • Clear boundaries between wellness and clinical use will be critical. Determining when AI mental health tools cross into regulated medical device territory — especially in the absence of clinician involvement — will influence future FDA guidance and compliance obligations for developers and providers.

“What’s that? (I may be paranoid, but no android),” Radiohead once sang. That familiar tension between human judgment and technology echoes strongly in today’s debate about artificial intelligence (“AI”). Rapid advances in AI are pushing the healthcare industry to reconsider long-standing assumptions about how care is delivered.

The federal government is now grappling with the reality that existing regulatory frameworks cannot keep pace with the breakneck speed of AI development.1 The Food and Drug Administration (“FDA”) has already been evaluating how AI-enabled tools may fit within its existing regulatory framework. FDA’s Digital Health Advisory Committee (“DHAC”) represents one of the agency’s early efforts to evaluate these issues as they relate to digital health technologies.2

In this Bulletin, we review recent progress from DHAC, which met on November 6, 2025, to evaluate generative AI mental health tools, including the use of chatbots marketed for therapy, coaching, or emotional support.3 One key question emerged: should mental health chatbots be regulated as medical devices?

DHAC is made up of industry representatives, consumer organizations, scientists, clinicians, and digital health subject matter experts.4 DHAC’s role is to advise FDA by “providing relevant expertise and perspective to improve FDA’s understanding of the benefits, risks, and clinical outcomes” associated with digital health technologies (“DHTs”).5 It also provides recommendations to FDA on how to promote innovation while “identifying risks, barriers, or unintended consequences” that could result from FDA regulation.

Highlights

For mental health applications of AI in particular, FDA hopes to determine how to: (1) set appropriate evidence thresholds; (2) define expectations for transparency and labeling; and (3) establish postmarket obligations that account for the rate of technological development. These topics were among those discussed at the November DHAC meeting.6

Setting Appropriate Evidence Thresholds

  • DHAC members recognized novel risks associated with large language models (“LLMs”), including hallucinations, confabulations, data drift, and model bias, and noted that tools may fail to detect or deliver therapeutic cues that would be recognized by a professional human therapist.
  • Recommendations included creating a shared vocabulary for describing how independently an AI system operates and requiring strong safety measures, such as clinician oversight of the tool’s use and clear pathways for escalating concerns when needed.
  • DHAC members also recommended that, before an AI tool goes to market, developers choose appropriate endpoints, test the tool with real-world users, adopt new measures for tracking progress, and design studies that clearly show how well the tool engages people and improves symptoms and overall health.
  • While recognizing these risks, some DHAC members noted that AI mental health tools can increase access to support services in settings where clinicians are in short supply and may help standardize symptom monitoring and follow-up.

Defining Expectations for Transparency and Labeling

  • DHAC members emphasized the importance of transparency regarding safety features, crisis-escalation pathways, and clinician involvement where appropriate.
  • Labeling should clearly explain what the device is and is not, including that the AI system is not a human therapist, as well as the device’s intended use, limitations, and prescribing requirements.
  • Labeling should also disclose key information about data use and privacy practices, model limitations, and, where informative, the underlying foundation models used to support the device.

Establishing Postmarket Obligations That Account for the Rate of Technological Development

  • DHAC members discussed the importance of ensuring that meaningful postmarket information about patient use is available to prescribing or overseeing clinicians.
  • DHAC members supported a total product lifecycle approach that includes postmarket monitoring tailored to the pace of AI development and model evolution.
  • Postmarket obligations should include mechanisms to ensure devices continue to perform as intended; avoid unintended shifts in function or use; and collect longitudinal data on engagement, adherence, and symptom trends.

Voices From the Docket

In connection with the November meeting, a docket on Regulations.gov solicited public comments from scientists, clinicians, professional associations, digital health companies, concerned individuals, and others.7 Public comments are critical to an agency’s regulatory process.

Public comments submitted through Regulations.gov focused heavily on crisis identification and response. Clinicians generally opposed the use of AI systems as a substitute for licensed mental health professionals and noted that professional responsibilities, such as mandated reporting and the exercise of individualized clinical judgment, cannot be met by AI.

Some digital health developers supported a risk-based framework that distinguishes wellness tools from tools making clinical claims and encouraged FDA to require clear, plain language disclosures. A number of commenters noted that AI tools may improve access in underserved areas if adequate safeguards are in place.

AGG reviewed all 116 comments received through Regulations.gov by the deadline on December 8, 2025. This section highlights only some of the major concerns raised by commenters.

  • Unregulated Practice of Medicine: An AI chatbot cannot meet the professional obligations of a licensed clinician. It cannot interpret context, observe nonverbal cues, or fulfill mandated reporting duties. Positioning a chatbot as a substitute for therapy may therefore pose unacceptable clinical risk.
  • Inadequate Substitute for Humans: Some commenters argued that AI therapy without human involvement lacks the mechanisms that make therapy effective.
  • Patient Safety: Systems that engage users in discussions regarding depression or anxiety must reliably identify suicidal ideation. Without validated escalation pathways, chatbots can introduce new risks rather than reduce them.
  • Data Privacy: Users do not know how their conversations are stored or reused. Given the sensitivity of mental health information, the lack of transparency around training data and retention practices is a significant concern.
  • Hallucinations: Generative AI models can produce incorrect or misleading responses with confidence. For vulnerable users, especially adolescents, hallucinations are potential sources of harm.
  • Bias: Several comments noted that biased training data can lead to biased mental health recommendations. Often, training datasets do not adequately represent diverse patient populations.
  • Need for Uniformity: Clinician groups urged strong FDA leadership to prevent a state-by-state patchwork, noting the proliferation of unregulated apps and calling for uniform federal protections and clear definitions of “therapy” and “digital mental health medical device” (among other terms).8

AGG Observations

  • DHAC’s discussion and the public comments submitted through Regulations.gov identify a set of issues that FDA will need to address in any future guidance or rulemaking. Crisis identification and response, transparency, and ongoing oversight of model performance were raised repeatedly and are likely to become baseline expectations for AI tools that engage with mental health topics.
  • Clear boundaries between wellness features and clinical functions will likely be necessary to avoid uncertainty about when a product is subject to FDA regulation. Without defined parameters, higher-risk tools may operate without appropriate oversight and users may misunderstand the nature of the support being offered.
  • Several commenters questioned whether certain intended uses can be offered safely without clinician involvement. These concerns may influence FDA’s evaluation of labeling, conditions of use, and whether professional supervision is necessary for higher-risk applications.
  • Privacy concerns appeared across most submissions. Although FDA does not regulate privacy, commenters indicated that transparency about data handling, training data sources, and retention practices may be essential given the sensitivity of mental health information.
  • DHAC’s support for a total product lifecycle approach aligns with the concerns raised in some public comments. Because generative AI systems continue to evolve after deployment, developers should expect regulatory requirements addressing update procedures, model governance, and postmarket data collection.
  • While some commenters described potential access benefits, most tied those benefits to the presence of defined safety and governance measures. FDA will likely need to balance access goals with safeguards that address the risks identified by both DHAC and the commenters.
  • Where DHAC’s continuing recommendations will lead remains unclear. Industry and healthcare providers should continue making their voices heard to further shape DHAC’s recommendations to FDA. And if you feel paranoid in the meantime, a therapist (or a chatbot?) will be happy to listen — regulated or not.

 

[1] On December 11, 2025, President Donald Trump signed an Executive Order on “Ensuring a National Policy Framework for Artificial Intelligence.” It is too early to tell the impact of this action. Executive orders do not themselves carry the force of regulations; they are instructions to federal agencies, often with specific deadlines. Executive Order, Ensuring a National Policy Framework for Artificial Intelligence, https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/.

[2] FDA is also adopting AI internally. In a bulletin issued December 1, 2025, the agency announced an internal initiative to use agentic AI (tools that autonomously set goals and act on them) to support reviewers and investigators, reinforcing its broader push to modernize. According to FDA Commissioner Marty Makary, the initiative is designed to modernize agency workflows and “put the best possible tools in the hands of our reviewers, scientists and investigators,” while maintaining human oversight to ensure reliable outcomes. See U.S. Food & Drug Admin., FDA Expands Artificial Intelligence Capabilities with Agentic AI Deployment (Dec. 1, 2025), https://content.govdelivery.com/accounts/USFDA/bulletins/3fdcd36?reqfrom.

[3] FDA, Digital Health Advisory Committee Meeting Announcement (Nov. 6, 2025), https://www.fda.gov/advisory-committees/advisory-committee-calendar/november-6-2025-digital-health-advisory-committee-meeting-announcement-11062025.

[4] Advisory Committee; Digital Health Advisory Committee; Addition to List of Standing Committees, 89 Fed. Reg. 13,268 (Feb. 22, 2024), https://www.federalregister.gov/documents/2024/02/22/2024-03618/advisory-committee-digital-health-advisory-committee-addition-to-list-of-standing-committees.

[5] FDA, Digital Health Advisory Committee, https://www.fda.gov/advisory-committees/committees-and-meeting-materials/digital-health-advisory-committee.

[6] FDA, Digital Health Advisory Committee Meeting: Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices – FDA Discussion Questions (Nov. 6, 2025), DHAC November 6, 2025 Discussion Questions.

[7] Digital Health Advisory Committee; Notice of Meeting; Establishment of a Public Docket; Request for Comments—Generative Artificial Intelligence-Enabled Digital Mental Health Medical Devices, 90 Fed. Reg. 44,196–97 (Sept. 12, 2025) (comments due Dec. 8, 2025), https://www.regulations.gov/document/FDA-2025-N-2338-0001.

[8] Undoubtedly, President Trump’s recently signed executive order regarding a national AI policy will come into play here. It is too soon to tell whether the order will successfully thwart state laws.