From therapy to liability: the legal and regulatory risks of AI mental health chatbots

Updated as of: 25 November 2025

Generative AI chatbots are increasingly being used as mental health tools, but the absence of clear regulatory frameworks exposes businesses to legal, regulatory, and reputational risks.

Key takeaways

  • US lawsuits have alleged that generative AI chatbots were emotionally manipulative and, in some instances, assisted in suicide attempts.
  • The US FDA is evaluating generative AI-enabled mental health tools.
  • Multiple state laws impose consent, disclosure, and advertising requirements on AI mental health providers.


Mental health chatbots face litigation

Several lawsuits in the US alleging that AI chatbots contributed to youth suicides are forcing regulators to act, raising questions about the role of these tools in mental health care.

Many of these chatbots weren’t built for mental health support at all. Large language model (LLM) chatbots such as ChatGPT, Pi, and Snapchat’s My AI were created for work, education, general conversation or entertainment purposes.

“If an AI chatbot offers a diagnosis or specific treatment plan without clinician oversight, it may violate state medical practice acts,” said Holt Law founder David Holt. “There is also growing litigation regarding negligence in crisis management—specifically, whether the AI failed to recognize suicidality or ‘hand off’ a user to human help.”

Just last month, seven lawsuits were filed in California state courts alleging assisted suicide and negligence claims – amongst other claims – against OpenAI. Character.AI has also faced scrutiny. In September 2025, three lawsuits were brought in federal courts in Colorado and New York against the company for allegedly designing and marketing AI chatbots that manipulated minors. Plaintiffs also alleged Character.AI exposed them to sexual content, isolated them from family and friends, and inflicted psychological harm.

Goodwin partner Jonathan Ishee said it’s still unclear what a company’s obligation is if an AI provides users with misleading information.

“I think the jury's still out legally as to what that looks like,” said Ishee. “If you're a business using AI, you need to put that in your terms of service, your privacy policy [or] your terms of use that you give people notice that they're responsible and that this is not a foolproof technology.”

The litigation comes as chatbot usage amongst teenagers has skyrocketed: a JAMA Network Open study found that almost 75% of US teenagers aged 13 to 17 years have used AI chatbots at least once.

Adults, too, are turning to chatbots to fill gaps in traditional care. A 2024 study from the National Library of Medicine showed that approximately 28% of surveyed adults used AI for quick support and as a personal therapist. Earlier this year, research conducted by Sentio University found that ChatGPT could be the largest provider of mental health support in the US, with 49% of surveyed participants who self-reported mental health conditions saying they use LLMs for mental health support. This trend coincides with a broader mental health crisis in the US, with nearly 60 million adults reportedly living with a diagnosed mental illness.

FDA tackles AI-enabled digital mental health devices

The demand for easily accessible mental health support has prompted the US Food and Drug Administration (FDA) to scrutinise AI-enabled digital mental health medical devices – many of which are serving as “AI therapists” without being designed for that purpose.

On 6 November 2025, the FDA’s Digital Health Advisory Committee held its second public meeting on generative AI-enabled digital mental health medical devices, weighing the benefits and risks of this technology. Public comments must be submitted by 8 December 2025.

Most LLMs were created as general-purpose tools, but the FDA generally considers software used to diagnose, treat, or manage a health condition to be Software as a Medical Device (SaMD). The agency also provides a list of authorised AI-enabled medical devices under the SaMD section of its website.

The FDA has authorised more than 1,200 AI-enabled medical devices, but the agency has confirmed that none involve generative AI for mental health uses. Chatbots are filling this gap, but without clear regulatory protections. Public commentary so far has revealed that whilst chatbots may serve a helpful purpose, like encouraging the use of reflective prompts, several problems remain – including a lack of robust scientific evidence, inconsistent performance, and over-reliance on chatbots.

Following the public meeting, the FDA explained in its summary that developing AI for digital mental health medical devices presents distinct challenges. “The device may confabulate, provide inappropriate or biased content, fail to relay important medical information, or decline in model accuracy (data drift). A patient may misinterpret device outputs or become more symptomatic with device use,” the FDA wrote.

Holt also warned that safeguards must address user psychology.

“To address the ‘anthropomorphism’ risk—where users form emotional bonds with the AI—companies should implement training protocols that force the AI to periodically remind the user of its artificial nature. Companies must also validate that their models are non-deterministic enough to be conversational, but strictly deterministic (fact-based) when providing medical facts,” said Holt.
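By way of illustration only, a safeguard along those lines might be sketched as follows in Python. The turn interval, keyword check, disclosure wording and function names are hypothetical assumptions, not drawn from any vendor’s actual implementation.

```python
REMINDER_TURN_INTERVAL = 10  # hypothetical: re-disclose every 10 conversation turns
AI_DISCLOSURE = "Reminder: I am an AI program, not a licensed clinician."

def looks_like_medical_fact_query(message: str) -> bool:
    """Crude keyword check standing in for a real intent classifier."""
    keywords = ("dosage", "medication", "diagnosis", "side effect")
    return any(word in message.lower() for word in keywords)

def build_chat_request(message: str, turn_count: int) -> dict:
    """Assemble a chat request that applies both safeguards described above."""
    messages = [{"role": "user", "content": message}]

    # Safeguard 1: periodically remind the user of the system's artificial nature.
    if turn_count % REMINDER_TURN_INTERVAL == 0:
        messages.insert(0, {"role": "system", "content": AI_DISCLOSURE})

    # Safeguard 2: allow varied, conversational replies by default, but force
    # deterministic decoding (temperature 0) when medical facts are requested.
    temperature = 0.0 if looks_like_medical_fact_query(message) else 0.8

    return {"messages": messages, "temperature": temperature}
```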

Experts have commented that one of the takeaways from the FDA's discussion is that the pre-market 510(k) clearance pathway for medical devices – which requires manufacturers to demonstrate their device is substantially equivalent to a device already on the market – may not be adequate. Instead, de novo classification – intended for novel low- to moderate-risk devices without a legally marketed predicate device – or premarket approval – intended for high-risk medical devices – may be more appropriate.

Regardless, the agency noted that human oversight via a “Human-in-the-Loop” workflow may be needed for safety. “Human oversight combined with fostering digital literacy among patients and providing robust HCP (health care provider) training may help prevent overreliance on AI-generated outputs,” the FDA said.

“The FDA’s discussion really underscored that training AI for mental health is about both accuracy and safety,” said Manatt, Phelps & Phillips partner Randi Seigel. “Companies need to start with a clear risk framework: what’s the intended use, and what could go wrong? From there, it’s about using clinically sound, diverse and unbiased data, building in safeguards against things like hallucinations, and planning for ongoing monitoring.”

The agency plans to closely monitor the technology after the AI has been deployed, including tracking product usage and performance drift – the gradual decline in accuracy as real-world inputs change – and developing new tools to assess training data.
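A minimal sketch of what that kind of post-deployment drift monitoring could look like is shown below; the quality metric, window size and alert tolerance are illustrative assumptions rather than FDA requirements.

```python
from collections import deque

class DriftMonitor:
    """Rolling check for post-deployment performance drift.

    The baseline, window size and tolerance are illustrative assumptions; in
    practice the quality score might come from clinician review or automated
    safety evaluations of real-world interactions.
    """

    def __init__(self, baseline: float, window: int = 500, tolerance: float = 0.05):
        self.baseline = baseline            # quality measured before deployment
        self.scores = deque(maxlen=window)  # most recent real-world scores
        self.tolerance = tolerance          # acceptable drop before flagging drift

    def record(self, score: float) -> None:
        """Log the quality score (0.0 to 1.0) for one real-world interaction."""
        self.scores.append(score)

    def has_drifted(self) -> bool:
        """Flag drift once the rolling mean falls below baseline minus tolerance."""
        if len(self.scores) < self.scores.maxlen:
            return False                    # not enough data to judge yet
        rolling_mean = sum(self.scores) / len(self.scores)
        return (self.baseline - rolling_mean) > self.tolerance
```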

States seek to restrict AI use in mental health care

Whilst the FDA is considering how best to establish a federal-level regulatory framework, some US states are enacting legislation regarding mental health chatbots. In 2025, Illinois enacted the Wellness and Oversight for Psychological Resources Act, and Nevada and Utah enacted similar legislation restricting the use of generative AI in mental health services.

The Wellness and Oversight for Psychological Resources Act requires that therapy and psychotherapy services be provided only by qualified, licensed, or certified professionals recognized by the state. Under the Illinois measure, a “licensed professional maintains full responsibility for all interactions, outputs, and data use associated with the [AI] system.” That law also requires licensed professionals to obtain consent for the use of AI to record or transcribe patient sessions. Licensed professionals must also explain to patients the specific purpose of the AI that will be used.

Nevada’s law prohibits mental and behavioural health care providers from using AI to directly provide mental and behavioural health care to patients, although AI can be used for administrative matters. AI providers are also prohibited from representing that an AI system can provide professional mental or behavioural health care or simulate human conversation, amongst other requirements.

Utah’s legislation also sets restrictions on the use of mental health chatbots. Specifically, Utah banned mental health chatbot suppliers from selling or sharing Utah users’ “individually identifiable health information” or their inputs. Mental health chatbots are also prohibited from advertising products or services in conversations with Utah users unless the chatbot identifies the advertisement as such and discloses any sponsorships or other affiliations.

Mental health chatbot suppliers must also “clearly and conspicuously” disclose that the chatbot is AI and not a human before Utah users access the chatbot. That notice is also required when a Utah user accesses the chatbot after seven days of inactivity and any time a Utah user asks the chatbot whether AI is being used.
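As a rough sketch of how a supplier might operationalise those disclosure triggers, the helper below checks the three conditions described above. The function name and timestamp handling are hypothetical; only the seven-day inactivity window comes from the statute’s description.

```python
from datetime import datetime, timedelta
from typing import Optional

INACTIVITY_WINDOW = timedelta(days=7)  # the statute's seven-day inactivity trigger

def needs_ai_disclosure(last_active: Optional[datetime],
                        now: datetime,
                        user_asked_if_ai: bool) -> bool:
    """Return True when the chatbot must disclose that it is AI before responding."""
    if last_active is None:
        return True                                # first time the user accesses the chatbot
    if now - last_active >= INACTIVITY_WINDOW:
        return True                                # returning after 7+ days of inactivity
    if user_asked_if_ai:
        return True                                # user asked whether AI is being used
    return False
```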

Mental health chatbot investigations are on the rise

In addition to states enacting legislation, US government agencies are also scrutinising the AI mental health chatbot space.

In August 2025, the Texas Attorney General’s Office announced it was investigating Meta and Character.AI for allegedly misleading children with deceptive AI-generated mental health services.

The AG’s announcement states that “These platforms may be utilized by vulnerable individuals, including children, and can present themselves as professional therapeutic tools, despite lacking proper medical credentials or oversight.” 

The US Federal Trade Commission (FTC) also issued orders to Character.AI, Meta, Alphabet, OpenAI, Snap, and X in September 2025 regarding their “companion products.” The FTC demanded details regarding how the technology companies process user inputs, share user data with third parties, monitor for negative impacts, and other matters.

According to an order to file a special report issued by the FTC, the regulator seeks insights into companies’ “mitigation measures,” including automated referrals to third parties related to substance abuse, suicide prevention, or mental health.

See Lexology PRO’s interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas, including enforcement dates, reporting deadlines and changes to regulations.    

Track the latest product regulation updates from authorities around the world using Scanner, Lexology PRO’s automated regulatory monitoring tool.

Stay up-to-date with the latest developments by following the USA IT & data protection hub.