TikTok blocked in Albania, UK warned against watering down online safety laws and Utah signs landmark online child safety law – plus other key updates.
This Lexology PRO key update rounds up key developments in global content moderation policy, enforcement and litigation.
In March Lexology PRO reported on Ofcom’s warning that more UK adult sites could face age assurance enforcement, NetChoice’s First Amendment challenge against a California law that restricts minors’ social media use, new requirements for app distributors in Singapore, Ofcom’s seven-figure age assurance information request fine, a DSA complaint by European digital rights groups against X over ‘trusted flagger’ forms, the UK Online Safety Act provisions coming into effect and insight from Ofcom public policy director Kate Davies on the Online Safety Act.
EU
The EU could announce a fine of more than $1 billion against X this summer over Digital Services Act (DSA) violations, The New York Times reported on 3 April citing people familiar with the plan. The action is set to include a fine as well as demands for product changes, the report said, adding that EU regulators are considering calculating the fine based on revenue that includes Elon Musk’s other companies such as SpaceX. But X’s government affairs team said in a post that if the reports are accurate, such enforcement by the European Commission “represents an unprecedented act of political censorship and an attack on free speech.”
TikTok will be blocked in Albania for one year, starting 13 March, media site BalkanWeb reported, citing the Authority for Electronic and Postal Communications (AKEP). The decision to ban the platform was revealed by Prime Minister Edi Rama in December amid concerns over violence and bullying among children, reportedly prompted by the stabbing of a teenager following arguments on social media. AKEP has instructed all internet service providers in Albania to block access to TikTok and submit written confirmation to the authority, the report added.
The European Parliament Committee on the Internal Market and Consumer Protection held a public hearing yesterday on the protection of minors online. Head of Digital Services Unit at the European Commission Prabhat Agarwal said the regulator’s upcoming guidelines on DSA article 28, which governs the online protection of minors, will establish a common, single market approach to ensure consistent protection throughout the EU. He noted that they will take into account issues raised during a recent call for evidence, including concerns over recommender systems, interface design and age assurance and verification requirements. Agarwal added that the draft guidelines, which will determine the commission’s enforcement standards and be used as a compliance benchmark, should be on the table “before the summer”.
UK
A draft transatlantic trade agreement contains commitments by the UK government to review enforcement of the Online Safety Act in order to avoid Trump’s tariffs, Politico reported in its London Playbook newsletter on 3 April, citing people briefed on the pact. The report added that the legislation, along with the Digital Markets, Competition and Consumers Act, would be subject to a regulatory review of its implementation, “not a do-over”. But UK child safety campaigners warned against such watering down of the regulation. Molly Rose Foundation chief executive Andy Burrows said in a letter he was “dismayed and appalled” by the government’s apparent willingness to trade off children’s safety to avoid tariffs.
North America
Trade group NetChoice on 18 March filed a lawsuit seeking to halt a new Louisiana law that requires social media platforms to obtain parental consent for minors to create accounts, to “make commercially reasonable efforts” to verify the age of users, and to refrain from targeting advertising at children. According to the group, whose members include YouTube, Meta and Google, the law unconstitutionally blocks children’s access to protected speech by conditioning that access on whether the individuals are willing to surrender their private information. The group added that the act, which is set to take effect in July, is not properly or narrowly tailored to address potential risks arising from minors’ social media use.
OpenAI model behavior lead Joanne Jang said in a 27 March blog post that the company has “evolved” its approach to content moderation in relation to image generation. “We’re shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm,” Jang said. “The goal is to embrace humility: recognizing how much we don't know, and positioning ourselves to adapt as we learn.” For example, rather than completely banning hate symbols, which Jang said could erase “meaningful conversations and intellectual exploration,” the company is developing technical methods to better identify harmful misuse.
US House Judiciary Committee chairman Jim Jordan on 6 March subpoenaed Alphabet, seeking its communications with former president Joe Biden's administration about content moderation policies. The committee said it is investigating the extent to which the former administration “coerced or colluded” with companies to censor lawful speech, adding it obtained documents showing YouTube was pressured into censoring certain lawful content. Meta said in August that the Biden administration had pushed it to censor Covid-19 content; "Alphabet, to our knowledge, has not similarly disavowed the Biden-Harris Administration's attempts to censor speech," Jordan said in a letter to the company.
On 26 March Utah Governor Spencer Cox signed the App Store Accountability Act into law, imposing requirements on major app store providers to verify user ages and obtain parental consent for minors downloading certain apps. Meta, Snap and X applauded Cox and Utah in a joint statement “for being the first in the nation” to empower users with greater control over teen app downloads, and urged other states to consider similar approaches. Meanwhile, Google criticised the act in a 12 March blog post, arguing it raises privacy risks and accusing Meta and other companies of trying to “offload their own responsibilities to keep kids safe to app stores”.
APAC
Australia’s eSafety Commissioner said in a report on 6 March that tech companies can do more to tackle the proliferation of terrorist and violent extremist material and activity on their platforms. The transparency report summarises responses the commissioner received from Google, Meta, WhatsApp, X, Telegram and Reddit on the steps they are taking to limit harmful content. The regulator noted serious gaps and safety deficiencies; for example, it said there was no mechanism for users not logged in to Facebook or YouTube to report livestreamed terror and violent extremism. WhatsApp took more than a day, Threads 2.5 days and Reddit 1.3 days to respond to user reports of harmful content, the report added.
The Australian eSafety Commissioner further said on 28 March that it is appealing against the Administrative Review Tribunal’s finding that the tribunal has jurisdiction to review its content removal alert. The tribunal had ruled in February that a notice sent by an eSafety Commissioner employee to X over an allegedly harmful post amounted to a formal removal notice under Australia’s Online Safety Act that X treated as a mandatory legal requirement. The tribunal made legal errors about jurisdiction, the regulator argued, adding that the appeal will provide “further clarity around the practice of regulators bringing to a platform’s attention material that potentially breaches their terms of service.”
Wikipedia’s operator has challenged a New Delhi court decision ordering it to remove content from one of its pages, Reuters reported on 7 April, citing two sources with direct knowledge of the matter. Indian news agency ANI had sued US-based Wikipedia for defamation last year, seeking the removal of statements allegedly describing it as an agency criticised for acting as a government "propaganda tool". The court had ruled last week that the statements, which it found defamatory and damaging to ANI’s professional reputation, should be taken down.
Africa
A Kenyan court has ruled that US tech company Meta can be sued in the country over its alleged role in promoting content that led to ethnic violence in Ethiopia, Reuters said on 4 April, citing a plaintiff in the case. The lawsuit calls on the company to alter its algorithm to stop promoting hateful material and hire more content moderators in Africa. Kenya’s high court has reportedly rejected arguments by Meta that local courts do not have jurisdiction to hear cases against it where it is not registered as a company.