Content moderation and online safety: key updates (5 – 19 November)

Updated as of: 19 November 2025

X content moderation mechanisms probed in Ireland, tech giants ramp up fight against US bills on minors' social media use, and an advocacy group pushes for an ‘Africa eSafety Commission’ – plus other key updates.


Lexology PRO recently reported on a General Court challenge against VLOP user number calculations as well as other upcoming EU cases addressing major DSA flashpoints, Singapore’s new online safety bill, and Ofcom’s decision to progress its suicide forum probe.

Europe

Dutch privacy group Bits of Freedom on 17 November accused Snapchat of using misleading notifications, in violation of Digital Services Act (DSA) rules on manipulative design, to attract user attention and create “the ideal conditions for excessive smartphone use”. According to research by Bits of Freedom, Snapchat often issues “recapture” notifications that contain false information, misleadingly resemble messages from other users or are incorrectly labelled as time-sensitive, especially to users who are not actively using the platform. The group called on the Authority for Consumers and Markets to take enforcement action against the platform, which is widely used by young people, and recommended that rules on attention-grabbing notifications be included in the EU Digital Fairness Act.

Italy’s privacy regulator, known as the Garante, on 14 November published a guide for minors, parents and other users on the safe and responsible use of social media platforms. The regulator noted that the widespread exchange of information on such platforms amplifies the risk of improper or fraudulent use of personal data, which could result in identity theft, abuse and the dissemination of unverified information. The Garante further set out practical tips for users including the importance of checking privacy settings, reading terms and conditions and avoiding oversharing and suspicious messages. 

Ireland’s media regulator, known as the Coimisiún na Meán, on 12 November said it is investigating X’s compliance with user complaint-handling requirements under article 20 of the DSA. The regulator said it will assess whether the platform properly informs users of the outcome of their content moderation reports, allows users to contest decisions by X to not remove content reported as illegal or harmful, and has in place user-friendly internal complaints-handling mechanisms. Digital services commissioner John Evans said that the regulator has “reason to suspect that X may not be in compliance” with such DSA provisions based on its “analysis of information gathered from a variety of sources”, highlighting that the right to appeal decisions “is an essential right and a cornerstone” of the regulation.

UK

The House of Commons Science, Innovation and Technology Committee said on 13 November that TikTok failed to share evidence to back up its claim that reducing staffing in its trust and safety team in favour of increasing the use of AI would improve the “efficacy of moderation”. Committee chair Dame Chi Onwurah had last month sent a letter to TikTok asking for clarity on online safety amid the proposed job cuts, including whether the company had assessed risks to user safety and Ofcom’s response to the reductions. TikTok’s response did not include data or a risk assessment justifying its claims, the committee said, adding that the platform did not show that the transition to AI would not lead to more user harm. “This is deeply concerning, as the committee has heard time and time again – from TikTok itself and many others – that there are limitations to AI moderation,” Dame Chi Onwurah said.

On 12 November, Ofcom provided a progress update on its approach to implementing the UK Online Safety Act, which includes a schedule for delivering outstanding codes, guidance and statutory reports. Ofcom notably revealed that it aims to publish the overdue categorisation register, which will reveal which platforms are subject to the act’s toughest rules, and consult on additional duties for categorised services around July 2026. The regulator said it adjusted its plans with regard to the categorisation regulations following Wikipedia’s High Court challenge against the thresholds. “We will now be carrying out a representations process in early 2026, which will give the services that we believe meet the threshold conditions an opportunity to comment on our provisional decisions before we finalise the register,” it said.

Ofcom on 11 November responded to a letter it received from a group of bereaved families and survivors of an online suicide forum which had criticised its October decision not to proceed with enforcement action against the site under the Online Safety Act. The regulator had deprioritised its probe after the forum blocked access to UK IP addresses, but on 6 November revealed its decision to once again prioritise the investigation following reports that the site remains accessible to UK users. Ofcom group director Oliver Griffiths emphasised in the letter that “at no point was the investigation closed”, adding that it took action immediately after receiving evidence of the existence of a mirror site available to people in the UK.

North America

Trade association NetChoice on 17 November sued to stop a new Virginia law targeting minors' social media use before it goes into effect in January. It argued that the bill “is the latest attempt in a long line of government efforts to restrict new forms of constitutionally protected expression based on concerns about their potential effects on minors”. It said the requirements that social media platforms verify user ages and impose a daily one-hour usage limit on users under 16 violate the First Amendment by restricting access to valuable and lawful resources for expression. “Virginia’s government cannot force you to read a book in one-hour chunks, and it cannot force you to watch a movie or documentary in state-preferred increments," said NetChoice litigation center co-director Paul Taske. “That does not change when the speech in question happens online.”

On 13 November, a group of social media giants including TikTok, Meta, YouTube and Google filed separate lawsuits in California seeking to prevent the enforcement of the Protecting Our Kids from Social Media Addiction Act on First Amendment grounds. The platforms argued that the rules, which would restrict minors from accessing personalised social media feeds without parental consent, violate their rights to curate third-party content and burden constitutionally protected activities. For example, TikTok said in its lawsuit that the act would fundamentally alter the platform’s experience, which is based on delivering curated content it believes will interest users, and could allow minority viewpoints to be drowned out.

On 6 November, the Social Media Victims Law Center and Tech Justice Law Project said they have filed seven lawsuits against OpenAI and its chief executive Sam Altman claiming that the GPT-4o model underpinning ChatGPT lacks effective safeguards, contributing to suicides and mental health issues. Filed on behalf of seven ChatGPT users in different California state courts, the lawsuits accuse OpenAI of knowingly releasing the model prematurely and compressing safety testing, despite receiving internal warnings about its “psychologically manipulative” behaviour, in order to benefit from increased user engagement and market share. The lawsuits allege that OpenAI has the technical ability to detect and interrupt dangerous conversations, redirect users to crisis resources and flag chats for human review but chose not to activate such safeguards, adding that its design choices exploited mental health struggles and reinforced harmful delusions.

APAC

Australia's eSafety Commissioner said on 10 November that it has teamed up with the European Commission Directorate-General for Communications Networks, Content and Technology (DG CNECT) and Ofcom to strengthen global cooperation on child online safety. In a joint statement, the regulators said they intend to establish “a technical trilateral cooperation group on age assurance to deepen cooperation on this pressing issue” and explore related solutions, technological developments and best practices. They also affirmed their individual commitments to continue implementing and supervising online safety legislation in their respective jurisdictions and ensure platforms effectively mitigate risks to children and protect them from harmful content. 

Africa & the Middle East

Research and advocacy organisation Child Online Africa (COA) on 17 November called on global leaders attending the G20 Summit in South Africa to establish a continental Africa eSafety Commission tasked with safeguarding children, women, the elderly and other vulnerable groups in the digital world. COA noted that uneven regulatory frameworks and fragmented implementation of safety policies are struggling to cope with surging online harm; “what is missing is an independent continental body with real teeth: one that can investigate abuses, require take-downs, coordinate law enforcement, and hold tech companies accountable,” the group said. It urged G20 chair and South African president Cyril Ramaphosa to champion the formal launch and funding of the Africa eSafety Commission, which could help set continental digital safety standards, raise awareness, and monitor and enforce against tech platforms.