AI Chatbot Compliance: Key Legal Risks and Regulatory Considerations for Businesses in 2026
Key Takeaways
- AI chatbot deployment is now a multi-regulatory compliance issue. Businesses must address overlapping obligations under data privacy laws, the FTC Act and state UDAP statutes, and emerging state AI transparency requirements when using chatbots.
- Chatbot outputs create direct consumer protection liability. Statements generated by AI are treated as company representations and can trigger FTC enforcement and state deceptive practices claims if inaccurate or unsubstantiated.
- High-risk use cases are drawing increased scrutiny. Regulators, including the FTC and FDA, as well as state agencies, are actively evaluating chatbot safety, disclosures, and potential harms, signaling likely expansion of enforcement and rulemaking in these areas.
Chatbots, including those powered by artificial intelligence (“AI”), are growing in popularity, but they must be deployed in a compliant manner to avoid creating more problems than they solve. In this article, we explore some key compliance considerations businesses should address before implementing these tools.
Data Privacy & Security
Notice & Consent
A chatbot is often a new data collection channel for a company. Consequently, updates may be needed to a company’s privacy policy or notice at collection to reflect this new data processing activity. Companies should also be mindful of the principles of data minimization (i.e., not collecting more data than necessary) and purpose limitation (i.e., not using data for undisclosed purposes or purposes inconsistent with consumers’ reasonable expectations) with respect to the retention and use of personal data collected via chatbots. Additionally, if sensitive personal data is likely to be submitted through a chatbot, companies will need to consider whether consent to such data collection is required by law.
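For engineering teams, one practical way to operationalize data minimization is to screen chatbot input before it is stored or forwarded. The following is a minimal TypeScript sketch under assumed conditions; the redaction patterns, the SessionState shape, and the consent flag are illustrative assumptions, not a complete or legally sufficient control.

```typescript
// Illustrative data-minimization pre-processing for chatbot input.
// Patterns and the consent flag below are assumptions for demonstration only.

const REDACTION_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, "[REDACTED_SSN]"],     // US SSN-like strings
  [/\b\d{13,19}\b/g, "[REDACTED_CARD]"],            // long card-like numbers
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[REDACTED_EMAIL]"]  // email addresses
];

// Strip identifier-like data from a message before retention or forwarding.
function minimize(input: string): string {
  return REDACTION_PATTERNS.reduce(
    (text, [pattern, token]) => text.replace(pattern, token),
    input
  );
}

interface SessionState {
  sensitiveDataConsent: boolean; // set by a prior consent prompt (assumed)
}

function prepareForProcessing(raw: string, session: SessionState): string {
  // Without recorded consent, redact before the message is stored or sent
  // to any model or vendor.
  return session.sensitiveDataConsent ? raw : minimize(raw);
}

// Example: "My SSN is 123-45-6789" becomes "My SSN is [REDACTED_SSN]"
console.log(prepareForProcessing("My SSN is 123-45-6789", { sensitiveDataConsent: false }));
```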
Third-Party Management
To the extent a chatbot is provided by a third-party vendor or built upon a third party’s AI model, companies should limit the third party’s ability to access and use the data that is input into, and used to prompt, the chatbot. In practice, this means requiring the third party to maintain strict privacy and security controls around the collection, storage, and disclosure of such data. Ensuring robust contractual controls are in place with third-party providers is also a prudent mitigation measure with respect to claims brought under state wiretapping statutes such as the California Invasion of Privacy Act (“CIPA”), which can allege that chatbots record conversations and give third-party service providers access to communications without consent.
Consumer Protection
Misleading Statements
Statements made by a website’s chatbot are treated as representations of the company itself, meaning that failure to uphold such statements can result in liability under state and federal Unfair or Deceptive Acts or Practices (“UDAP”) statutes. This risk exists on a sliding scale: chatbots trained on a discrete dataset (e.g., customer service FAQs) are less prone to making inaccurate or misleading statements than generative AI chatbots that are built on a large language model (“LLM”) and generate their own responses. The FTC has repeatedly taken the position that chat responses promising specific results from a product (e.g., financial returns, health benefits) or guaranteeing eligibility or refunds can be deceptive or misleading if not substantiated.
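One common engineering mitigation, not specific to any statute, is to screen generated responses for unsubstantiated promise-like claims before they reach the user. The sketch below is a deliberately crude TypeScript illustration; the patterns and fallback message are assumptions, and a production system would pair any filter of this kind with human review of flagged transcripts.

```typescript
// Crude post-generation guardrail: flag promise-like claims (guarantees,
// specific returns or outcomes) before a generated reply is shown.
// Patterns and fallback text are illustrative assumptions only.

const PROMISE_PATTERNS: RegExp[] = [
  /\bguarantee(d|s)?\b/i,
  /\byou (will|are certain to) (earn|receive|qualify|be cured)\b/i,
  /\b\d+% (return|returns|refund)\b/i
];

const SAFE_FALLBACK =
  "I can't promise specific results. A member of our team can follow up " +
  "with details about this product.";

function screenReply(generated: string): string {
  const flagged = PROMISE_PATTERNS.some((p) => p.test(generated));
  return flagged ? SAFE_FALLBACK : generated;
}

// Example: a reply containing "guaranteed 12% returns" is replaced.
console.log(screenReply("This fund offers guaranteed 12% returns."));
```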
AI Transparency Requirements
An emerging body of state law requires AI chatbots to disclose their AI nature so as to place the user on notice that they are not, in fact, communicating with a human. Laws in California, Colorado, Maine, New Jersey, Texas, and Utah impose varying duties to disclose that AI is conducting a communication. Some states require such disclosure at the outset, while others require disclosure only if the communication is intended to entice a sale, if the user inquires whether they are speaking with a human, or if the communication occurs in a regulated occupation (e.g., healthcare, finance).
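The two most common disclosure triggers described above, an up-front notice and a response to a direct inquiry, map cleanly onto session events. The following TypeScript sketch assumes a hypothetical send callback and a naive inquiry pattern; actual disclosure wording and triggers should be set with counsel based on the applicable state laws.

```typescript
// Sketch of two disclosure triggers: notice at session start and a direct
// answer when the user asks whether they are speaking with a human.
// The disclosure text and inquiry pattern are illustrative assumptions.

const AI_DISCLOSURE =
  "You are chatting with an automated AI assistant, not a human.";

// Naive pattern for "am I talking to a human?"-style questions (assumed).
const HUMAN_INQUIRY =
  /\b(are you (a )?(human|person|bot|real person)|is this a (human|person|bot))\b/i;

// Disclose at the outset, as the stricter statutes require.
function onSessionStart(send: (msg: string) => void): void {
  send(AI_DISCLOSURE);
}

// Disclose on inquiry, as the narrower statutes require; returns true if
// the message was handled as a disclosure request.
function onUserMessage(text: string, send: (msg: string) => void): boolean {
  if (HUMAN_INQUIRY.test(text)) {
    send(AI_DISCLOSURE);
    return true;
  }
  return false;
}
```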
High-Risk Use Case Examples
Children
In addition to the privacy considerations that apply to the collection of children’s and teens’ personal data, AI chatbot use by children and teens raises concerns regarding emotional influence or manipulation, exposure to inappropriate or harmful content, and the inability of younger users to distinguish between humans and bots.
In September 2025, the FTC launched an inquiry into companies operating consumer-facing AI chatbots, seeking information about how these entities use chatbots to interact with children and teens, including chatbots that purport to be “companions.” Last year, California became the first state to pass a law mandating specific safeguards for AI companion chatbots used by minors, including a requirement to remind users at least every three hours that the chatbot is AI-generated and that they should take a break. Additionally, some companion chatbot laws apply to users of all ages.
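As a concrete illustration of the reminder requirement, the short TypeScript sketch below schedules a recurring notice for the life of a session. The interval constant and message text are assumptions drawn from the description above, not statutory language.

```typescript
// Sketch of a periodic reminder for minor users: at least every three hours,
// remind the user that responses are AI-generated and to take a break.
// Interval and wording are illustrative assumptions.

const THREE_HOURS_MS = 3 * 60 * 60 * 1000;

function startBreakReminders(send: (msg: string) => void): () => void {
  const timer = setInterval(() => {
    send(
      "Reminder: you are chatting with an AI, not a person. " +
      "Consider taking a break."
    );
  }, THREE_HOURS_MS);
  return () => clearInterval(timer); // invoke when the session ends
}
```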
Mental Health
Chatbots engaging in healthcare-related communications raise unique compliance concerns, including, for example, the risk of engaging in the unlicensed practice of medicine. In particular, chatbots engaging with users on sensitive mental health topics have been a focus of concern for policymakers.
The U.S. Food and Drug Administration (“FDA”) Digital Health Advisory Committee has engaged in discussions regarding the regulation of generative AI-enabled digital mental health medical devices, such as chatbot “therapists.” The committee has discussed concerns over the safety of certain AI-enabled devices and chatbots that are intended to deliver therapeutic content, diagnose mental health conditions, or serve as a substitute for a mental healthcare provider. Thus, it is possible that the FDA will seek to implement additional oversight and regulation targeted at generative AI mental health chatbots.
In the absence of comprehensive federal regulatory oversight, states have implemented a spectrum of laws to specifically regulate the use of AI-powered chatbots in mental healthcare. Some states have taken a hard line, virtually prohibiting mental health chatbots, while others have adopted a variety of protections designed to enhance transparency and user safety. For example, Illinois prohibits providing, advertising, or otherwise offering therapy or psychotherapy services to the public, including through the use of AI, unless the services are conducted by a licensed professional. Nevada prohibits AI systems from representing themselves as a doctor, mental health provider, or therapist, or from furnishing services that a user would consider to be professional mental health services. California and New York require all AI companion chatbots to maintain protocols to reasonably detect and address users’ self-harm ideation.
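To make the California/New York-style protocol requirement concrete, the following TypeScript sketch screens each incoming message and surfaces crisis resources on a match. The keyword patterns are a deliberately crude assumption; a production protocol would use a purpose-built classifier, clinically reviewed response language, and escalation paths.

```typescript
// Hedged sketch of a self-harm detection step: screen each user message and,
// on a match, return crisis resources instead of a normal model reply.
// The patterns below are crude illustrative assumptions, not a clinical tool.

const CRISIS_PATTERNS: RegExp[] = [
  /\b(kill(ing)? myself|suicid(e|al)|end my life|self[- ]harm)\b/i
];

const CRISIS_RESPONSE =
  "It sounds like you may be going through something serious. If you are " +
  "in the U.S., you can call or text 988 (Suicide & Crisis Lifeline) at any time.";

// Returns a crisis response if the message matches, otherwise null so the
// normal chatbot pipeline can proceed.
function screenForCrisis(message: string): string | null {
  return CRISIS_PATTERNS.some((p) => p.test(message)) ? CRISIS_RESPONSE : null;
}
```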
Conclusion
In short, chatbots are useful tools, but they can introduce legal risk. Businesses should ensure that any AI-powered chatbots they deploy undergo proper legal review and, where appropriate, are vetted through the business’s vendor due diligence, privacy impact assessment, and artificial intelligence governance processes.
For more information on AI chatbot compliance, please contact AGG Healthcare partner Charmaine Mech Aguirre or Privacy & Cybersecurity partner Erin Doyle.
- Erin E. Doyle
Partner
- Charmaine Mech Aguirre
Partner

