How are AI companies responding to child safety concerns?

Updated as of: 21 October 2025

AI chatbots are surging in popularity among teens, but safety concerns are drawing regulatory scrutiny. How are AI companies acting to prevent these risks, and what are regulators doing to hold them accountable?

Over 70% of teenagers have engaged with AI companion chatbots, and half use them regularly, according to recent research from the University of Chicago. 

The growing adoption has sparked concern about the risks that AI, and chatbots in particular, poses to children, ranging from exposure to inappropriate and harmful material to severe detrimental effects on children’s mental health.

Regulators are also intensifying their focus on AI companies’ child safeguarding efforts. The US Federal Trade Commission (FTC) launched an inquiry into AI chatbots acting as companions in September 2025, ordering seven providers to submit information about how they monitor and mitigate potential negative impacts on children.

Some companies have taken action. Both Meta and OpenAI recently announced plans to strengthen safety guardrails on their chatbot services, ranging from more granular parental controls to age prediction tools.

This follows heated criticism directed at Meta earlier in 2025, after an internal policy document revealed that its AI chatbot was permitted to “engage a child in conversations that are romantic or sensual.” Other AI providers are the subject of lawsuits filed by parents alleging that chatbots encouraged their children to attempt suicide or engage in other risky behaviours.

Lexology PRO considers AI companies’ evolving approach to children’s safety, as well as the increasing regulatory scrutiny around children’s relationships with AI.

What safety steps have companies taken? 

Last week, Meta announced new parental controls to regulate children’s interactions with AI chatbots. From early 2026, parents will be able to prevent their child from opening one-on-one chats with AI characters, block specific chatbots, and gain “insights” into their child’s interactions with AI chatbots.

Meta also stated that teenagers’ AI chats will be restricted to PG-13 content by default, and that these settings may only be altered with parental consent.

OpenAI unveiled improved child safety protocols in September. The company pledged to move towards age prediction, whereby the AI automatically detects a user’s age and tailors its service accordingly. It will also introduce a greater range of parental controls, including the ability for parents to link their account with their teenager’s. 

OpenAI founder Sam Altman even indicated that the service could start contacting authorities when young users talk seriously about suicide, in response to concerns that as many as 1,500 people a week may be talking to the chatbot about taking their own lives.

Character.ai is another chatbot provider that has tightened its child safeguarding in the wake of criticism. The platform has implemented measures including a pop-up that directs users who mention self-harm or suicide to the National Suicide Prevention Lifeline. It has also made changes to its model to reduce the likelihood of under-18s encountering sensitive or suggestive content, and offers parents the option of a weekly email summarising their child’s activity on the app.

What risks does AI present to children? 

According to UK children’s charity the NSPCC, 78% of the public would like to see new generative AI products subjected to child safety checks, even if this delays the products’ release.

This underscores the apparent lack of public trust in AI companies’ current efforts to safeguard children. The potential risks associated with the technology are manifold, with some parents citing severe impacts on their children’s mental and physical wellbeing, alongside accounts of children being exposed to highly disturbing content.

Mental health

Concerns have been raised that AI chatbots can harm children’s mental health by fostering addiction and dependency, and by damaging young people’s ability to form normal relationships with their peers, leading to social withdrawal.

A growing number of lawsuits have also been filed by parents alleging that AI chatbots encouraged their children to attempt suicide or participate in other harmful, even violent, behaviours.

Privacy risks 

AI systems are trained and refined using vast datasets, often including sensitive or personal data. When children interact with chatbots, they also generate large quantities of behavioural data. 

There are concerns around how this data is processed and repurposed. For example, the UK Information Commissioner’s Office (ICO) issued a preliminary enforcement notice against Snap, claiming the company had “failed to adequately identify and assess the risks” to My AI users, particularly teenagers, in relation to targeted advertising.

Ultimately, the ICO opted to work with Snap to bring its operations into compliance rather than issuing a final enforcement notice. However, the case highlights the regulatory pressure on AI companies to treat children’s personal data with care.

Exposure to inappropriate material 

One in five Spanish minors report having had AI-generated nude imagery of them shared without their consent, according to 2025 research by the charity Save the Children.

AI-generated child sexual abuse material (CSAM) and non-consensual nude imagery are among the most serious forms of harmful content that children risk encountering on AI services.

Reports claim that Character.ai potentially exposed children to sexual grooming, exploitation, emotional manipulation and other acute harms.

These included chatbots depicting popular film and television characters that coached children on how to hide medication from their parents, confessed romantic feelings to a child and used racist language towards a child.

Despite its safety features, Google’s Gemini also reportedly shared inappropriate and unsafe material with children, including content related to sex, drugs and alcohol, as well as unsafe mental health “advice.”

Targeted by harmful genAI content

There have been reports of AI-generated material being used to extort children, typically for financial gain. Criminals either use AI-generated sexually exploitative images, or threaten to create such images, as a form of blackmail.

Malicious actors also use AI-generated images to build fake profiles, which they use to persuade young people to send them money or to manipulate them in other ways.

Current regulatory landscape

The measures put forward by AI companies are primarily aimed at limiting children’s exposure to harmful AI-generated material, by introducing age restrictions and by giving parents greater oversight of their children’s AI activity.

However, activist groups and regulators are keen for AI companies to go further to tackle the most serious risks, like AI-generated CSAM.

In February 2025, the UK became the first country in the world to create new AI sexual abuse offences to protect children from predators generating AI CSAM. The Home Office has also pledged to introduce a specific offence for individuals who operate websites where CSAM is shared, carrying a sentence of up to 10 years in prison.

The government also urged tech companies to act: “I would implore Big Tech to take seriously its responsibility to protect children and not provide safe spaces for this offending,” stated Jess Phillips, Minister for Safeguarding and Violence Against Women and Girls.

Meanwhile, in the US, legislation establishing stricter safeguards for AI chatbots was signed into California law on 13 October 2025 – the first so-called “chatbot safety law.” The law mandates age verification, protocols to address suicide and self-harm, warning labels and stronger penalties for those who profit from illegal deepfakes.

Children’s online safety, in relation to AI and beyond, is also a clear priority in the EU. The Commission announced on 10 October that it has sent information requests to Snapchat, YouTube, the Apple App Store and Google Play to gain an understanding of these companies’ measures to protect minors under the EU Digital Services Act.

What’s more, the EU AI Act includes provisions prohibiting AI systems that present an “unacceptable” risk of cognitive behavioural manipulation affecting specific vulnerable groups, with age listed as an axis of vulnerability.

However, critics of the AI Act argue that it lacks sufficient provisions to address children’s unique vulnerabilities and the complex harms presented by AI deepfakes.

See Lexology PRO’s interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas throughout 2025, including enforcement dates, reporting deadlines and changes to regulations.   

Track the latest data protection updates from authorities around the world using Scanner, Lexology PRO’s automated regulatory monitoring tool. 

Stay up to date with key developments and in-depth articles by following Lexology’s AI hub.