As regulators crack down, how can businesses avoid “AI washing”?

Updated as of: 17 June 2025

Businesses risk allegations of fraud and consumer protection violations if they overstate AI capabilities, as regulators increasingly crack down on “AI washing.”

Key takeaways

  • Enforcement and litigation against AI washing are accelerating, with the US DoJ pursuing criminal charges and the SEC imposing tough penalties.
  • The EU AI Act 2024 and UK DMCCA 2024 drastically increase companies’ legal liability and risk of facing major fines.
  • Companies should maintain thorough documentation to support AI-related statements. 


The use of “AI” as a marketing term is rapidly increasing. During a three-month period in 2024, the UK Advertising Standards Authority (ASA) identified 16,000 unique adverts that referenced AI. This may be because AI-enabled businesses are more likely to be perceived as innovative, forward-thinking, and more attractive to investors. 

The hype around AI technology has given rise to a phenomenon known as “AI washing” – when companies overstate or misrepresent their AI capabilities to present themselves more favourably.

It comes in different forms, from falsely claiming that a product or service is AI-enabled or fully automated, to exaggerating an AI system’s accuracy. AI washing poses significant legal and compliance risks for companies, including shareholder litigation and penalties for misleading commercial claims.

“Companies need to be vigilant in ensuring that representations about their AI technologies are accurate, substantiated, and not exaggerated in order to avoid liability for their directors under corporate law,” says Jan Spittka, partner at Clyde & Co. 

Regulators in the US are taking action against AI washing. The Department of Justice (DoJ) recently charged the former CEO of AI startup Nate with securities and wire fraud, after he allegedly made false and misleading statements about Nate’s AI capabilities. 

Authorities elsewhere are also scrutinising companies’ AI claims. The EU AI Act 2024 introduced enhanced transparency requirements for AI systems, while the UK ASA issued guidance on AI as a marketing term last year.

AI washing could become more tightly regulated in the future, similar to the tough stance global regulators have taken against greenwashing. Lexology PRO examines the legal risks companies face and the steps they can take to ensure their claims are truthful and compliant.  

Litigation and enforcement are on the rise

AI washing risks allegations of false advertising, securities fraud and breaches of consumer protection law.

Here’s how authorities in the US, EU and UK are targeting companies participating in AI washing. 

US

“As US regulators and private parties become more sophisticated in their understanding of how AI works, we can expect an increased focus on the way companies represent the nature and extent of their reliance on AI to sell a product or a service,” says Joel Cohen, partner at White & Case.

As of May 2025, 46 AI-related securities class actions have been filed in the US since 2020, most of which involve allegations of AI washing. Much of the regulatory action has focused on alleged investor fraud or breaches of consumer protection law.

Recent US enforcement action

The Securities and Exchange Commission (SEC) announced a US$310,000 penalty against the owner of Rimar Capital and an employee of the company in October 2024, after it found that the investment adviser had raised nearly US$4 million based on allegedly exaggerated AI claims. 

The Federal Trade Commission (FTC) settled with Workado, which markets and sells AI detection tools, in April 2025 to resolve allegations that the company made false or unsubstantiated claims about the accuracy of its model, breaching Section 5 of the Federal Trade Commission Act 1914.

In September 2024, the FTC acted against five companies alleged to have participated in AI washing; one company agreed to pay US$193,000 to settle the allegations, while others were ordered to cease their non-compliant conduct. 

EU 

The EU AI Act 2024 doesn’t specifically refer to AI washing, but it does impose increased transparency obligations on AI developers and distributors. This places a legal burden on entities involved in AI deployment to be honest and transparent about when and how certain tools are being used. 

Providers of AI systems that generate synthetic audio, image, video or text must ensure all outputs are marked and detectable as AI-generated. Deployers of such systems must disclose when content has been generated or manipulated by AI.

There are also criminal law risks associated with AI washing within the EU. Companies can be held liable for fraud or false advertising under member state laws.

UK  

Authorities in the UK are taking a robust stance against false advertising more broadly, drastically increasing companies’ enforcement risk. 

The newly enacted UK Digital Markets, Competition and Consumers Act 2024 (DMCCA 2024) grants the Competition and Markets Authority (CMA) far-reaching powers to directly enforce consumer protection law, including complaints about false or misleading claims, which encompasses AI washing.

Under DMCCA 2024, the CMA may issue fines of up to 10% of companies’ annual global revenue.

Meanwhile, the ASA has acted in circumstances where AI washing amounts to false advertising. The regulator ruled against an Instagram ad for mobile app Pixelup, which claimed to “enhance” photos using AI. The ad – found to be in breach of the CAP Code prohibition against misleading advertising – was banned, with Pixelup ordered not to exaggerate AI performance in the future.

The regulator has published guidance on using AI terminology in marketing, to help companies avoid misleading consumers with AI claims. 

Don’t exaggerate...and other tips for avoiding AI washing

Given the severe and far-reaching consequences of AI washing, it’s vital that companies ensure their claims are accurate, substantiated and clear. 

“Companies should avoid making broad and sweeping claims about AI without clarification. Communications need to be consistent and accurate. Boards should regularly review public-facing statements and consider implementing company-wide policies and training which specifically address making assertions about AI,” according to DWF partner Tom Mungovan.

White & Case’s Cohen says if representations about how the company uses AI feel exaggerated, “that’s going to attract the interest of investors, consumers, and regulators if the company’s performance doesn’t meet market expectations.”

Here are some practical steps companies can take to avoid allegations of AI washing. 

Ensure claims are substantiated 

Claims about AI’s abilities must be thoroughly substantiated, with up-to-date documentation to support statements. Some laws, like the EU AI Act 2024, legally oblige companies to maintain certain documentation, logs and assessments to ensure transparency about AI use and capabilities. 

Audit AI statements 

AI washing can appear in a variety of contexts, including investor-facing settings, such as within presentations, pitches or forecasts. It could also occur in annual reports, company filings, product descriptions, marketing and social media content. 

All these statements must be vetted to ensure they reflect technical realities. 

“Companies should view AI claims through the same lens with which they view all marketing and advertising claims. Is this statement true and accurate? Could it mislead a customer?  Once satisfied that the claim is not false or misleading, companies should take care to maintain records substantiating the claim – this is the best defence to an allegation of AI-washing,” says Marissa Caldwell, partner at McCarthy Tetrault. 

If using technical language or “buzzwords” to describe AI capabilities, the exact meaning should be specified. Avoid vague and sweeping claims. 

Carry out regular performance checks  

All AI systems should undergo rigorous performance testing before being brought to market or implemented internally, to ensure they perform consistently and as expected. 

The testing should be repeated regularly once the AI is in use, so any claims about accuracy or capabilities are valid and up to date. 

If AI performance changes, any claims related to it should be reviewed and withdrawn or updated as necessary. The FTC’s enforcement against Workado demonstrates the risks companies face if they fail to ensure their AI’s capabilities align with their marketing claims.
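For compliance teams that want to operationalise this review loop, the check can be sketched in code: measure the system’s accuracy on a labelled holdout set and flag any advertised figure that no longer holds. This is a minimal illustrative sketch, not a regulatory standard – the predictions, labels, 95% advertised figure and tolerance are all assumptions for the example.

```python
# Hypothetical periodic check that an advertised accuracy claim still
# holds for a deployed model. All figures here are illustrative.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

def check_claim(predictions, labels, advertised_accuracy, tolerance=0.01):
    """Return (measured, ok). ok is False when measured accuracy falls
    more than `tolerance` below the publicly advertised figure, which
    signals that marketing claims should be reviewed or withdrawn."""
    measured = accuracy(predictions, labels)
    return measured, measured >= advertised_accuracy - tolerance

# Illustrative run against a small labelled holdout set
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
measured, ok = check_claim(preds, labels, advertised_accuracy=0.95)
# measured is 0.8, so ok is False: the 95% claim needs review
```

In practice the holdout data, thresholds and review cadence would be set by the company’s own documentation obligations; the point is that the comparison between measured and advertised performance is logged and repeatable.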

Develop clear policies and training 

Companies should develop training and policies to help employees understand the risks associated with AI washing and how it can be avoided. This is particularly important for staff within sales and marketing, who are most likely to be making public-facing statements about AI performance.

See Lexology PRO’s interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas throughout 2025, including enforcement dates, reporting deadlines and changes to regulations.   

Track the latest data protection updates from authorities around the world using Scanner, Lexology PRO’s automated regulatory monitoring tool. 

Stay up to date with key developments and in-depth articles by following Lexology’s AI hub.