Nationwide Wave Ahead? New York’s Push for Private Lawsuits Over Harms Caused by AI Chatbots
Footnotes for this article are available at the end of this page.
Takeaways
- Private Right of Action: New York’s S7263 bill bars organizations from using artificial intelligence (“AI”) chatbots to give substantive guidance that would amount to the unauthorized practice of a licensed profession. The proposal would allow individuals harmed by AI chatbot outputs to go to court and sue operators directly for damages.
- Patchwork of State Regulations: As New York and other states advance new AI chatbot laws, operators, users, and healthcare providers will face a complex patchwork of requirements. With regulations evolving rapidly, organizations must evaluate their risk management and compliance strategies to address ambiguous standards and avoid unintended legal exposure.
- Looming Scope and Compliance Risks: Emerging proposals like New York’s ambiguous restriction on bots providing “substantive” advice raise concerns about enforceability, may chill innovation and encourage over‑censorship, and create compliance challenges for mixed‑use tools such as chatbots in patient portals or intake workflows.
Chatbot regulation is top of mind for many these days, especially given recent headlines involving minors and incidents of self-harm related to chatbot interactions. In 2026 alone, lawmakers have advanced nearly 100 state measures and three federal proposals.1 In this crowded and fast‑moving landscape, varied state approaches will generate significant uncertainty for chatbot operators and healthcare providers.
New York lawmakers are taking this trend a step further, advancing legislation that would open the courthouse doors to individuals harmed by AI chatbot outputs. The proposal, Senate Bill S7263, prohibits organizations that deploy AI chatbots from allowing those systems to provide substantive guidance that, if delivered by a person, would amount to the unauthorized practice of a licensed profession, including medicine, law, dentistry, psychology, and social work.2 The bill attempts to combine a ban on professional advice by AI chatbots with a private right of action, allowing users to directly sue entities that deploy chatbots when their recommendations cause harm.
On March 4, 2026, the New York State Senate Internet and Technology Committee voted unanimously to advance S7263 to the Senate floor. While the legislation remains in its infancy, its progress signals increasing momentum for state-level regulation of AI chatbots.
As New York and other states advance new AI chatbot laws, operators, users, and healthcare providers will face a complex patchwork of requirements across states. This variation at the state level creates significant uncertainty: each state could have its own unique definitions, enforcement mechanisms, and liability exposures. It is essential for organizations leveraging these technologies to critically evaluate the safeguards around AI chatbot content and ensure their systems do not cross professional or ethical boundaries.
AI chatbot regulation directly concerns healthcare providers. Hospitals, health systems, digital health platforms, and similar organizations will need to carefully consider how AI chatbots are integrated into clinical and administrative workflows. State regulations range from mandatory disclosures and consent requirements to broader transparency and safety regimes. Navigating these shifting legal standards is crucial for healthcare providers to safeguard patients and minimize legal and operational risks.
The Proposal
Sponsored by State Senator Kristen Gonzalez, S7263 prohibits covered proprietors from permitting their chatbots to provide “substantive” guidance that would require a professional license if a person were to give the same advice. “Proprietors” are defined as any person, business, or entity that owns, operates, or uses a chatbot to interact with users, but the definition excludes third-party developers who license chatbots to others. The prohibition extends not only to advice from a chatbot but broadly to any “substantive response, information, or advice” that would constitute the unauthorized practice or use of a professional license.
The proposal also imposes a clear disclosure requirement: proprietors must tell users that they are interacting with an AI system rather than a human. The bill makes clear, however, that such disclaimers do not insulate proprietors from liability.
An individual may bring a civil action to recover damages if (1) the chatbot provides a substantive response that requires a professional license; or (2) the chatbot does not provide proper notice. The proposal would take effect 90 days after the governor signs the bill into law. Notably, the legislation lacks any fiscal note, despite the potential impact on the courts.
The proposal’s broad scope raises concerns about enforceability and unintended consequences. Industry and civil liberties groups warn that it could chill innovation and lead to over-censorship. The bill’s prohibition on “substantive” advice creates ambiguity for proprietors attempting to interpret what constitutes the unauthorized practice of a profession. That ambiguity will be especially acute for mixed‑use tools, for example, chatbots embedded in patient portals or intake workflows. In these cases, licensed professionals remain involved in the user’s care, but AI still surfaces tailored information to users.
If S7263 passes, chatbot operators must revisit their risk assessments and consider whether certain features require redesign or internal controls.
New York’s AI Companion Law
S7263 is part of a broader slate of legislation in New York, including an AI companion law that took effect November 5, 2025.3 This law requires operators of AI companions to implement safety protocols to detect suicidal ideation or self‑harm. “AI companions” are defined as AI systems that simulate a sustained human relationship by (1) retaining information from prior interactions; (2) asking unprompted emotion-based questions; and (3) sustaining an ongoing dialogue concerning personal matters. Operators must route users to crisis services and regularly remind users they are interacting with AI (including at session start and at least every three hours of continuous use). The state attorney general can enforce violations, with civil penalties directed to suicide prevention programs.
Another proposal currently advancing in the New York legislature, “The Prohibition of Unsafe Chatbot Features for Minors” (S9051), builds on the AI Companion Law.4 S9051 would prohibit chatbots from offering services to minors when the technology contains “unsafe chatbot features.” These features include design elements that mimic human relationships, reuse sensitive personal data, undermine safety guardrails, encourage secrecy or self‑harm, or generate sexually explicit or exploitative content for minors.
S9051 also creates a private right of action for harmed users to sue AI companion operators for damages. The proposal establishes a rebuttable presumption that the chatbot caused or contributed to the injury, which the operator must disprove. Placing that burden on operators significantly raises the stakes and potential costs of litigation.
State and Federal Efforts
Other states are rapidly introducing legislation to address AI chatbot risks, often focusing on vulnerable populations and the limits of disclosure‑based safeguards.
California’s new Companion Chatbot Law also authorizes private suits by users harmed when operators violate disclosure, safety, or reporting requirements, including statutory damages of at least $1,000 per violation plus attorneys’ fees.5 California bars operators from invoking an “autonomous harm” defense to avoid civil liability. In Michigan, the “Kids Over Clicks” initiative is a package of bills aimed at protecting children from social media exploitation.6 Among other goals, the initiative would prohibit AI companion chatbots from having child users.
A bipartisan consensus continues to grow that chatbots require more robust regulation, with both Republican- and Democratic-led initiatives emerging at the state and federal levels. In Florida, GOP lawmakers have backed child safety bills that address AI and social media harms.7 In the U.S. Senate, Republican Senator Josh Hawley co‑sponsored AI legislation with Democrats that imposes stricter rules and liability for high‑risk AI systems, including chatbots.8
Observations
State efforts are unfolding against a shifting federal backdrop. On December 11, 2025, President Donald Trump signed Executive Order 14364 directing agencies to advance a “uniform national policy framework for AI.”9 The executive order directs agencies to develop legislative recommendations for a federal scheme that could limit more aggressive state approaches, and instructs the attorney general to create an AI Litigation Task Force to challenge state AI laws inconsistent with federal policies.10 Whether these federal initiatives curb, reshape, or simply coexist with state experiments like S7263 remains an open question.
S7263 and similar legislation across the country target companies, institutions, and government entities that use chatbots to interact with the public. By exposing deployers to direct lawsuits for unauthorized professional advice, such laws could turn chatbot errors into actionable claims. Chatbot operators and providers should pay close attention to the quickly evolving state and federal landscape. Hospitals, health systems, digital health platforms, and other organizations deploying these tools need to carefully consider the guardrails in place around content delivered by AI chatbots.
For assistance with AI chatbot regulation in the healthcare industry, please contact a member of AGG’s Healthcare practice.
[1] Justine Gluck, The Chatbot Moment: Mapping the Emerging 2026 U.S. Chatbot Legislative Landscape (Mar. 12, 2026), https://fpf.org/blog/the-chatbot-moment-mapping-the-emerging-2026-u-s-chatbot-legislative-landscape/.
[2] S. 7263, 2025–2026 Reg. Sess. (N.Y. 2025).
[3] N.Y. Gen. Bus. Law art. 47 (McKinney 2025).
[4] S. 9051, 2025–2026 Reg. Sess. (N.Y. 2025).
[5] Cal. Bus. & Prof. Code § 22601 (West 2025).
[6] Michigan’s “Kids Over Clicks” legislative package consists of Senate Bills 757 through 760. See Mich. S.B. 757, 102d Leg., Reg. Sess. (2026); Mich. S.B. 758, 102d Leg., Reg. Sess. (2026); Mich. S.B. 759, 102d Leg., Reg. Sess. (2026); Mich. S.B. 760, 102d Leg., Reg. Sess. (2026).
[7] Executive Office of the Governor, Press Release, “Governor Ron DeSantis Announces Proposal for Citizen Bill of Rights for Artificial Intelligence” (Jan. 2025), https://www.flgov.com/eog/news/press/2025/governor-ron-desantis-announces-proposal-citizen-bill-rights-artificial.
[8] Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act of 2025, S. ___, 119th Cong. (2025).
[9] Exec. Order No. 14,364, 90 Fed. Reg. 57,349 (Dec. 10, 2025).
[10] Memorandum from the Att’y Gen. to All Dep’t of Justice Emps. (Jan. 9, 2026), Artificial Intelligence Litigation Task Force, https://www.justice.gov/ag/media/1422986/dl?inline.
- Andrew Tsui
Partner
- Aditya Krishnaswamy
Associate