MEPs push for EU-wide social media age restrictions, UK trade unions fight TikTok moderator job cuts, and Singapore proposes new online safety rules – plus other key updates.

Lexology PRO recently reported on TikTok’s appeal against an ICO information notice on its use of children’s data, Ofcom’s fine against 4chan and what it reveals about US/UK online safety tensions, the European Commission’s DSA requests for information from tech giants, and how AI companies are responding to child safety concerns.
Europe
Members of the European Parliament’s Internal Market and Consumer Protection Committee on 16 October adopted a report which proposes an EU-wide minimum age of 16 for access to social media and AI companions unless authorised by parents, and suggested that those under the age of 13 be kept off the platforms entirely. The report calls for EU measures to make online services safer for minors through rules on “persuasive technologies and practices” including targeted ads, influencer marketing, addictive design and loot boxes. It calls on the commission to make full use of its power under the Digital Services Act to “conclude its investigations imminently” and hold platforms accountable by issuing fines, effective corrective measures, or even banning non-compliant sites. Thirty-two MEPs voted in favour of the report, five voted against and nine abstained.
Ireland’s Data Protection Commission (DPC) and Ireland’s media regulator Coimisiún na Meán on 16 October issued a joint statement on advancing the safety of children and the protection of their personal data online. The authorities reaffirmed their shared commitment to ensuring online services uphold children’s rights and best interests, noting the importance of balancing data protection-by-design and safety-by-design. “The DPC is committed to working alongside Coimisiún na Meán to uphold robust standards that safeguard children’s rights in the digital environment, where data protection and online safety are built in from the start,” the data protection regulator said.
Italy’s securities regulator, known as Consob, on 16 October said it blocked three websites for cloning the faces and voices of politicians to promote fake investment and cryptocurrency services. It said such scam ads used the authority of prominent institutional figures, including prime minister Giorgia Meloni and deputy prime minister Matteo Salvini. The regulator said it also blocked 14 other URLs which similarly engaged in abusive financial promotion and unauthorised intermediation activities, bringing the total number of websites Consob blocked since July 2019 to 1,443.
UK
The UK Parliament Science, Innovation and Technology Committee’s chair Chi Onwurah said on 17 October that the government should do more “to plug gaps” in the Online Safety Act, adding that it “seems complacent” about the viral spread of harmful misinformation. The government had rejected the committee’s recommendation that it pass online safety legislation which covers AI platforms on the basis that AI-generated content is already regulated under the act. Despite acknowledging that the monetisation of harmful content through digital ad models incentivises such algorithmic amplification of misinformation, the government “did not commit to acting” and instead stated it would keep the matter under review, the committee added.
Unions including the Trades Union Congress and a group of online safety campaigners on 13 October sent an open letter to UK Science, Innovation and Technology Committee chair Onwurah requesting an urgent investigation into proposed TikTok job cuts. “Every single redundancy is targeted at the ‘Trust and Safety Team’, effectively ending content moderation in London,” the letter noted, adding that these “safety-critical workers” are the frontline of protecting users from “deep fakes, toxicity and abuse”. TikTok is already facing an ICO investigation into its use of children’s data, and now the platform is looking to “replace skilled UK workers with unproven AI-driven content moderation” and offshore workers subject to “gruelling conditions” and “poverty pay”, the signatories added.
North America
Meta introduced new features on 21 October to help protect older adults from online scams, such as displaying warnings in WhatsApp when users attempt to share their screen with unknown contacts and adding AI scam detection tools on Messenger chats. The company said it has detected and disrupted almost 8 million accounts associated with criminal scam centers so far this year, adding that people aged 60 and over in the US lost a combined $4.8 billion to online fraud in 2024 alone. Meta also said it is working with the National Elder Fraud Coordination Center, a US-based nonprofit which brings together law enforcement and companies including Amazon and Google to fight fraud that targets older adults.
Instagram on 14 October began to gradually roll out content restrictions for teen accounts guided by PG-13 movie ratings. This means users under the age of 18 will automatically be placed into a 13+ content setting which they won’t be able to opt out of without parental permission, in order to make teens’ experiences “feel closer to the Instagram equivalent of watching a PG-13 movie”, it said. This includes hiding or not recommending posts with strong language, risky stunts and harmful behaviors, blocking mature search terms, and preventing teens from following accounts that share age-inappropriate content. It further introduced a “Limited Content” setting for parents who prefer to impose stricter controls.
The Digital Childhood Institute (DCI) on 14 October filed a complaint with the Federal Trade Commission alleging that Google knowingly exposes children to harmful content and facilitates exploitative contracts with minors across its ecosystem, including on Google Play, YouTube and Chrome. It further claimed that Google employs a deceptive age rating system, broken parental controls and allows third-party apps to collect children’s data in violation of the Children's Online Privacy Protection Act (COPPA). “Google’s so-called ‘safety features’ are a dangerous illusion,” said DCI president Melissa McKay. “By outsourcing age ratings to a meaningless two-minute questionnaire and cutting parents out the moment a child turns 13, Google isn’t protecting families. It’s betraying them.”
New York City filed a lawsuit on 8 October accusing social media giants including Facebook, YouTube, Snapchat, and TikTok of fuelling a mental health crisis among children through their addictive platforms. The 327-page complaint, filed in Manhattan federal court, seeks damages from the platforms’ parent companies for causing a “public nuisance” in the city and for gross negligence by intentionally engineering their platforms to appeal to young people “in a way they know is causing serious harm”. The lawsuit added that the resultant mental health crisis has caused damage to public health and safety, endangered the health and welfare of people and forced schools and hospitals to devote significant resources to address the issue.
APAC
India’s Ministry of Electronics and Information Technology today proposed strict new rules that would require AI and social media companies to clearly label AI-generated content, in a bid to tackle the spread of deepfakes used to spread misinformation, cause user harm and manipulate elections. Under the draft amendments to the Information Technology Rules 2021, which are now open for stakeholder feedback, social media companies must implement labelling requirements as well as other reasonable technical measures to confirm and label synthetically generated content. “These amendments are intended to promote user awareness, enhance traceability, and ensure accountability while maintaining an enabling environment for innovation in AI-driven technologies”, the ministry said.
Singapore’s Ministry of Digital Development and Information, along with the Ministry of Law, introduced a new Online Safety (Relief and Accountability) Bill for first reading in parliament on 15 October. The bill is notably set to establish a new Online Safety Commission in the first half of 2026, which will administer a statutory reporting mechanism to enable victims to seek timely remedies for online harms – the commission will be empowered to issue directions to take down harmful content and accounts and, in cases of non-compliance, impose an access blocking or app removal order. Other key provisions include introducing statutory torts to allow individuals to seek redress from courts and hold perpetrators to account, and requiring user identity information disclosures.
The Office of the Australian Information Commissioner on 10 October published guidance for social media platforms and age assurance providers on compliance with privacy provisions under the Social Media Minimum Age rules (SMMA), which will take effect on 10 December. The data watchdog, which co-regulates the new rules alongside the eSafety Commissioner, set out the ‘out-of-bounds’ practices for entities handling personal data for age-assurance methods and highlighted the importance of minimising the inclusion of sensitive data and destroying collected data once purposes are met. “SMMA is not a blank cheque to use personal or sensitive information in all circumstances; we’ll be actively monitoring platforms to ensure they stay within the bounds by deploying age assurance proportionately and lawfully,” said Privacy Commissioner Carly Kind.
In addition, the eSafety Commissioner is regularly sharing information to help prepare for the incoming rules – most recently, it issued new resources for families, carers, educators and young people explaining the changes and proposing action plans for the transition. The package, published 16 October, includes a dedicated hub with FAQs, practical get-ready guides for parents and tailored information for educators.