France opens criminal probe into TikTok’s algorithms, new OpenAI models allow sites to classify harms, and Australia updates list of age-restricted social media platforms – plus other key updates.

Lexology PRO recently reported on the UK data regulator’s notice of intent to fine Reddit over age assurance concerns, legal notices by Australia’s online safety regulator to four AI companion chatbot providers, and the European Commission’s preliminary findings against Meta and TikTok over moderation rules.
Europe
The Paris public prosecutor said on 4 November that it had opened a criminal investigation into TikTok over the impact of the platform’s algorithms on children’s mental health. The investigation follows a parliamentary committee inquiry, which found evidence of insufficient moderation by TikTok and claimed that its algorithms likely push vulnerable people towards suicide-related content. The prosecutor instructed the Paris police cybercrime brigade to look into offences including providing a platform for “illicit transactions linked to organised crime”, punishable by up to 10 years’ imprisonment and a €1 million fine, and “propaganda for products, objects or methods recommended as a means of committing suicide”, punishable by three years’ imprisonment and a €45,000 fine.
Italy’s communications authority, AGCOM, said on 31 October that the country will implement mandatory age verification for access to adult content sites from 12 November. The rules require specified entities that distribute pornographic content in Italy, regardless of their country of establishment, to verify that users are over 18 through certified third-party systems, and to comply with data security and minimisation requirements with regard to their age verification mechanisms. Sites that fail to comply face potential warnings and fines of up to €250,000, AGCOM added.
UK
UK Science, Innovation and Technology committee chair Dame Chi Onwurah sent a letter to TikTok on 28 October asking for clarity on online safety amid proposed cuts to its Trust and Safety teams. The letter raised concerns over the company’s ability to effectively protect users from harmful content and disinformation following news that it plans to cut moderation staff as it deploys AI and third-party specialists. It asked TikTok to clarify outstanding questions, including the total number of proposed job losses, whether it had assessed risks to user safety and Ofcom’s response to the staffing reductions. TikTok had written to the committee on 20 October, after the Trades Union Congress and others requested an urgent investigation into the cuts, emphasising that its plans are intended to improve the efficiency of its moderation processes.
A report published by the UK Speaker’s Conference on 27 October called for urgent action “to end abuse of MPs and candidates”, including stronger oversight of social media platforms and the introduction of an elections code of practice by Ofcom. The report noted that the vast majority of such intimidation takes place online and could therefore, in theory, be addressed through effective implementation of the Online Safety Act. It urged the government to start preparing future legislation to address emerging threats such as disinformation and deepfakes, and to ensure that Ofcom’s oversight extends to platforms’ policies for removing abusive and illegal content targeting public figures.
North America
A US federal judge on 31 October narrowed a lawsuit brought by a group of parents against Roblox alleging that the gaming platform facilitates an “illegal gambling ring” targeted at children. Judge Vince Chhabria dismissed claims of unlawful competition and negligence, finding that they did not apply to Roblox’s alleged conduct – for example, he held that one of the cited laws applies only to entities that take “affirmative action” to facilitate gambling, rather than those that, like Roblox, allegedly knowingly allow and profit from such operations. He added that other statutes brought under the unlawful competition prong target physical devices such as slot machines rather than online games. A spokesperson for Roblox told Lexology PRO: “We are pleased the court significantly narrowed the claims remaining in the case.”
Chatbot service provider Character.AI said on 29 October that it is banning children under 18 from engaging in open-ended chats with AI from 25 November. The provider said it will also roll out new age assurance measures using an in-house model combined with third-party tools, and that it plans to establish an independent non-profit ‘AI Safety Lab’ to test and advance safety techniques for “next-generation AI entertainment features”. The company said it is introducing these measures in response to regulators expressing concerns over the content teenagers were exposed to when using the open-ended chat feature, adding that it believes this is “the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology.”
OpenAI on 29 October released two new “open safety” reasoning models to help websites identify and classify online harms. The models, called gpt-oss-safeguard, take a developer-provided policy together with the content to be classified, and output a conclusion about where the content falls under that policy, along with the reasoning behind it. OpenAI noted that organisations can tailor the models to their specific policy needs; for example, a video gaming forum might develop a policy to classify posts that discuss cheating in the game, or a product reviews site might use its own policy to screen reviews that appear likely to be fake.
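To illustrate the policy-plus-content pattern described above, here is a minimal sketch. The policy text, server URL and function names below are illustrative assumptions, not OpenAI's documented format; since the models are open-weight, they would typically be served locally behind an OpenAI-compatible API.

```python
# Illustrative sketch: pairing a developer-written policy with content to
# classify, in the chat-message shape an OpenAI-compatible server expects.
# The policy wording and all names here are hypothetical examples.

POLICY = """\
Policy: game-cheating discussion (illustrative example only).
- VIOLATES: posts that share, sell or request cheats, exploits or hacks.
- ALLOWED: posts that discuss cheating in general terms or report cheaters.
Answer with VIOLATES or ALLOWED, then explain your reasoning."""


def build_safeguard_messages(policy: str, content: str) -> list[dict]:
    """Combine the policy (as the system prompt) with the content to be
    classified (as the user message)."""
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": content},
    ]


messages = build_safeguard_messages(POLICY, "Selling an aimbot for $5, DM me.")

# To query a locally served model (server URL and model name are assumptions),
# any OpenAI-compatible client could then be used, e.g.:
#
#   from openai import OpenAI
#   client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
#   reply = client.chat.completions.create(
#       model="gpt-oss-safeguard-20b", messages=messages)
#   print(reply.choices[0].message.content)  # verdict plus reasoning
```

Because the policy travels with each request rather than being baked into the model's weights, a site can revise its rules and re-classify content without any retraining.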
A group of US senators on 28 October introduced a bipartisan bill to ban AI companies from offering AI chatbot companion services to children. Known as the GUARD Act, it would require AI chatbot providers to verify user ages and clearly disclose to users that the chatbots are not humans or licensed professionals, and would establish new criminal penalties for companies that knowingly provide children with AI companions that solicit or produce sexual content. “Our legislation imposes strict safeguards against exploitative or manipulative AI, backed by tough enforcement with criminal and civil penalties,” said the bill’s co-sponsor Richard Blumenthal.
APAC
Australia's eSafety commissioner said on 4 November that it has informed Facebook, Instagram, Snap, Threads, TikTok, X, YouTube, Kick and Reddit that they have been classified as age-restricted platforms under the upcoming Social Media Minimum Age rules. The nine platforms on the updated list – which meet the key requirement of having a “sole or significant purpose” of enabling online social interaction – must take reasonable steps to ensure that users under 16 do not hold accounts from 10 December; failure to do so could result in enforcement action and civil penalties of up to A$49.5 million (€28 million). The commissioner also notified several platforms, including Discord, Roblox, Google Classroom and YouTube Kids, that they do not currently fall into the age-restricted category, but noted that it may reassess classifications as new platforms or features emerge.
In addition, the Australian eSafety commissioner said on 28 October that Apple and Google have removed video chat service OmeTV from their Australian app stores following correspondence with the regulator. The commissioner noted that OmeTV had failed to engage with the regulator, despite receiving a formal warning regarding alleged breaches of Australia’s codes over its failure to prevent child grooming and sexual exploitation on the service. Google and Apple ultimately took action as part of their obligations under the App Store Code, the commissioner added.
Pakistan’s Telecommunication Authority has collaborated with Meta to introduce ‘teen accounts’ to enhance online protection for young users in the country, the Associated Press of Pakistan reported on 29 October. According to the report, the new feature will automatically place users under 16 into teen account settings, restricting unwanted contact and filtering sensitive content while “allowing safe digital exploration” on the platform.
Africa & the Middle East
A new “State of Online Harms” report by media company Gatefield, presented on 27 October, found that 50% of online users in Nigeria experience regular harm and warned that existing legal frameworks such as the Cybercrimes Act fail to tackle the rising threat. The report, which surveyed more than 500 Nigerian internet users, also found that 58% of online harms target women, that 31% of reported harmful online content is never removed and that 34% of online harms take place on X. It stressed that tech companies operating in Nigeria must establish local content moderation teams with cultural and language expertise, and called on the country to enact a comprehensive Online Safety Bill to explicitly address online threats.