Regulatory trends in AI enforcement 2025

Updated as of: 24 November 2025

Enforcement against AI companies is ramping up. From data privacy to child safeguarding and AI-washing, here are the top enforcement trends of 2025 and what they reveal about the AI regulatory landscape.


Regulators have fixed their attention on the AI rollout, intending to ensure the technology is being implemented in line with legal and regulatory standards. Enforcement against AI companies has so far focused on four key areas: data privacy, online safety, intellectual property and consumer protection. 

What’s more, with the EU AI Act 2024 coming into full effect in August 2026, companies that provide and distribute AI systems are set to face even greater regulatory scrutiny. 

So far, regulators have focused on the AI providers themselves; however, companies relying on third-party AI vendors must also understand the implications for liability or risk facing regulatory or legal scrutiny themselves. Companies cannot plead ignorance if a vendor’s system compromises personal data, infringes IP rights or otherwise violates the law. 

Using personal data for AI training

Several data protection authorities (DPAs) have raised concerns over how AI providers are using personal data for AI training. DeepSeek came under global scrutiny early in 2025, due to the vast amounts of data it collects from users and the fact that it is headquartered in China. Chinese intelligence laws require Chinese-owned companies to submit their data to the Chinese Communist Party if requested, raising serious privacy concerns. 

The Italian Garante, South Korean Personal Information Protection Commission (PIPC) and Berlin DPA were among those to launch investigations and temporarily ban DeepSeek in their jurisdictions. The PIPC issued corrective recommendations to DeepSeek in April, stating that DeepSeek had transferred user data to China and the US without consent. The PIPC says DeepSeek has committed to cooperating to address its concerns.

X is also currently being investigated by multiple authorities over its use of personal data to train its AI chatbot Grok. The Office of the Privacy Commissioner of Canada (OPC) began investigating X in February, following a complaint alleging violations of the Personal Information Protection and Electronic Documents Act 2000 (PIPEDA).

Meanwhile, the Irish Data Protection Commission, acting as the social media platform’s lead authority in Europe, started investigating X’s compliance with the EU General Data Protection Regulation (EU GDPR) in April, over its processing of personal data contained in publicly accessible posts for AI training.

This indicates that authorities are paying particular attention to the transparency of AI providers’ personal data processing: whether providers are appropriately informing users about how their data is being used and whether valid consent has been obtained. 

Online safety

Online safety, particularly risks to children, remains a key concern for regulators monitoring the effects of AI. The scrutiny has focused on AI chatbots in response to fears raised by parents and advocacy groups about risks ranging from severe mental health impacts to exposure to highly inappropriate material.

The Italian Garante issued €5 million (US$6 million) and €15 million (US$17 million) fines against Replika and OpenAI, respectively, for EU GDPR violations, including having insufficient age verification measures.  

The Garante also ordered AI app ClothOff to cease processing Italians’ personal data in October. ClothOff allows users to generate “deep nude” images that portray real people in sexually explicit or even pornographic poses. The regulator concurrently launched an investigation to combat other AI nudity apps, due to the severe risks they pose to fundamental rights and freedoms.

The UK Office of Communications (Ofcom) issued a £50,000 (US$65,000) fine against deepfake nude app provider Itai Tech Ltd on 20 November for having insufficient age checks. The regulator says it has also launched investigations into 20 other adult content providers. 

In September, the US Federal Trade Commission (FTC) launched an inquiry into AI chatbots acting as companions, focusing on how these services measure, test and monitor potentially negative impacts on children and teens.

Some AI companies have already moved to strengthen child safeguarding measures. Meta announced new parental controls to regulate children’s interactions with AI chatbots, taking effect from early 2026. Parents will be able to prevent their child from opening one-on-one chats with AI characters, block specific chatbots and gain “insights” into their child’s interactions with AI. 

Similarly, OpenAI unveiled improved child safety protocols in September, including a pledge to adopt age prediction, whereby the AI automatically detects a user’s age and tailors its service accordingly. 

Data scraping 

2025 has seen several high-profile legal cases passing through the courts, with IP rights holders accusing AI companies of violating their rights by repurposing content for AI training without consent or remuneration. 

Anthropic agreed in September to pay US$1.5 billion to settle allegations from authors that it had used pirated copies of books scraped from the internet to train its model.

Governments have also begun weighing in on the need to strike a balance between AI innovation and protecting rights holders. The UK government’s consultation on “Copyright and AI” ended in February; responses will inform the UK’s approach to establishing a framework to clarify how AI companies may legally repurpose data scraped from the internet. 

Provisions of EU AI Act 2024 that took effect in February somewhat limit how AI companies may use scraped personal data to train their systems. AI systems that create or expand facial recognition databases through the untargeted scraping of images from the internet or CCTV footage are prohibited under the law. 

Companies should pay attention to the outcome of ongoing court cases that will establish legal precedents around data scraping in different jurisdictions and could guide future trends in regulatory enforcement. 

Consumer protection

Regulators are paying close attention to how AI companies market their products, particularly claims made about the sophistication and capabilities of AI features, to combat so-called AI-washing.

The US state of Texas launched an investigation into several AI chatbot platforms, including Meta AI Studio and Character.AI, in August, over allegations that the companies had engaged in deceptive trade practices by claiming to be mental health tools.

In the same month, the FTC filed a complaint against Air AI, alleging that the company and its owners made exaggerated claims about the ability of its AI tools to replace human sales representatives. This follows an FTC crackdown on deceptive AI claims and schemes launched in 2024. 

Meanwhile, the US Department of Justice charged the former CEO of AI startup Nate with securities and wire fraud in April, after he allegedly made false and misleading statements about Nate’s AI capabilities. 

These cases underscore the importance of ensuring that any AI-related claims are substantiated. It’s vital that companies thoroughly understand how their services are using AI and are transparent about the limitations. 

See Lexology PRO’s interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas throughout 2025, including enforcement dates, reporting deadlines and changes to regulations.   

Track the latest data protection updates from authorities around the world using Scanner, Lexology PRO’s automated regulatory monitoring tool. 

Stay up to date with key developments and in-depth articles by following Lexology’s AI hub.