As AI-driven abuse against women rises, regulators worldwide are cracking down. Ofcom urges platforms to go beyond compliance: lead on women’s safety or face mounting scrutiny.

New guidance published by the UK Office of Communications (Ofcom) this week sets out practical safety measures online platforms can implement to better protect women and girls online. These measures go above and beyond companies’ legal duties under the UK Online Safety Act 2023 (OSA 2023), as regulators grow increasingly concerned about how new technologies are being used to facilitate abuse.
Women and girls online face serious, gender-specific harms, including stalking, coercive control and intimate image abuse, Ofcom says.
38% of women online have personally experienced online violence, and 85% say they have witnessed digital violence against others, according to one global study.
From de-monetising content promoting misogynistic abuse to setting volume limits on posts to help prevent mass-posting of abuse, the measures recommended by Ofcom are far-ranging, aimed at addressing the specific risks faced by women and girls online.
Lexology PRO unpacks Ofcom’s new guidance, as regulators and lawmakers around the world consider whether more needs to be done to prevent gender-based online abuse.
Regulators target deepfake and nudity apps worldwide
Ofcom’s guidance comes just days after the regulator issued a £50,000 (US$65,000) fine against Itai Tech Ltd, the company behind a deepfake nude app, for insufficient age checks. It has also launched investigations into 20 other adult content providers.
AI image abuse disproportionately affects women and girls – 99% of deepfake image abuse depicts women, Ofcom states. The UK is not the only jurisdiction concerned about misogynistic abuse, particularly where AI is involved.
The Italian Garante ordered the ClothOff app to cease processing personal data in October and concurrently launched an investigation to combat other AI nudity apps, due to the severe risks they pose to fundamental rights and freedoms, particularly for women.
South Korea is also cracking down, with amendments to the law in 2024 that criminalised possessing and viewing sexually explicit deepfake images and videos, punishable with prison terms of up to three years and maximum fines of 30 million won (US$22,600). Previously, under South Korea’s Sexual Violence Prevention and Victims Protection Act, only generating such imagery was an offence.
What’s the expectation on platforms?
Ofcom’s latest industry guidance focuses on additional practical measures online platforms can implement to enhance their online safety programmes, with particular emphasis on preventing misogynistic abuse.
Under OSA 2023, platforms are legally obliged to remove and prevent content that is illegal from circulating on their services, which includes nonconsensual deepfake nude imagery, harassment, stalking and controlling or coercive behaviour.
All user-to-user services must put in place systems and processes to remove this content when it is flagged to them. OSA 2023 also mandates Ofcom to develop guidance setting out the safety measures that platforms could adopt to help meet their obligations, without stipulating the approach individual platforms should take.
Some of the safety measures included in Ofcom’s new guidance are:
- prompts and time-outs, encouraging users to reconsider before sharing misogynistic content;
- de-monetising posts or videos promoting misogynistic abuse and sexual violence;
- setting volume limits on posts to help prevent mass-posting of abuse in pile-ons against women;
- allowing users to quickly block or mute multiple accounts at once;
- enhanced visibility restrictions, giving users greater control over who can view or interact with their profile;
- using “hash-matching” to detect and remove non-consensual nude images; and
- blurring nudity by default.
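To illustrate the “hash-matching” measure listed above: platforms compare the hash of an uploaded image against a database of hashes of known non-consensual images, so a reported image can be blocked on re-upload without the platform storing the image itself. The sketch below is a deliberately simplified assumption-laden illustration using an exact cryptographic hash (SHA-256); production systems such as PhotoDNA or PDQ use perceptual hashes that also survive resizing and re-encoding, and the function and database names here are hypothetical.

```python
import hashlib

# Hypothetical database of hashes of images reported as non-consensual
# (illustrative bytes stand in for real image data).
KNOWN_IMAGE_HASHES = {
    hashlib.sha256(b"reported-image-bytes").hexdigest(),
}

def matches_known_image(image_bytes: bytes) -> bool:
    """Return True if an upload's hash matches a previously reported image."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_IMAGE_HASHES

# A byte-identical re-upload is flagged; unrelated content is not.
print(matches_known_image(b"reported-image-bytes"))    # True
print(matches_known_image(b"unrelated-image-bytes"))   # False
```

Because only hashes are shared, victims and trusted reporting bodies can flag imagery without circulating the images themselves; the trade-off is that exact hashing, as sketched here, misses even trivially altered copies, which is why deployed systems rely on perceptual matching.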
The UK regulator also expects online platforms to test new services or features for “abusability” before rolling them out, to identify how they might be misused by perpetrators.
In addition, moderation teams should receive specific training on online gender-based harms and companies should seek input from experts and survivors of online harm to improve their policies and practices; for example, by carrying out user surveys.
What are the consequences of non-compliance?
Severe financial penalties are available for online platforms that fail to fulfil their online safety obligations under laws such as the UK OSA 2023 and the EU Digital Services Act 2022 (DSA 2022). Fines range up to 10% of a company’s global annual revenue or £18 million (US$24 million), whichever is greater, under the UK law, and up to 6% of worldwide annual turnover for in-scope companies under the EU DSA 2022.
Even in jurisdictions with less robust enforcement powers compared with the UK and EU, regulators are prioritising women’s online safety. For instance, Australia’s eSafety Commissioner cast a “spotlight” on digital gender-based violence in November.
Ofcom is urging online platforms to go further than legally required to protect women and girls. This is perhaps a reflection of how the risks associated with emerging technologies are currently outpacing the regulations.
With regulators around the world homing in on women’s online safety – particularly how to combat the emerging risks associated with AI – companies should consider demonstrating proactive compliance to pre-empt scrutiny. That could mean engaging with regulatory initiatives and consultations, seeking feedback from users and acting on their suggestions, and implementing enhanced safety measures that exceed minimum legal requirements.
See Lexology PRO’s interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas throughout 2025, including enforcement dates, reporting deadlines and changes to regulations.
Track the latest data protection updates from authorities around the world using Scanner, Lexology PRO’s automated regulatory monitoring tool.
Stay up to date with key developments and in-depth articles by following Lexology’s Internet and Social Media Hub.