From AI audits to child-safety claims, regulatory pressure is reshaping how tech companies build, govern, and defend their products. Here’s what in-house teams in the technology sector must prioritise now.
In-house counsel specialising in tech face accelerating pressure on multiple fronts.
Insights from Lexology over the past quarter show that AI governance is being rewritten, online-safety obligations are expanding, cyber-resilience mandates are tightening, biometric litigation is accelerating and competition authorities are targeting digital gatekeepers.
These are no longer isolated compliance tracks; collectively, they are rewriting how products are built, how platforms are governed and how cross-border risk is managed.
The pace and breadth of this change mean in-house teams cannot afford siloed responses; coordinated, cross-functional action is now essential to stay ahead.
Online safety & platform accountability: child safety and harmful content drive stricter duties
Governments worldwide are adopting tougher regimes to curb online harm, with a particular focus on misogyny, children’s safety, platform accountability and algorithmic risk.
Lexology’s coverage shows:
- Ofcom is urging platforms to implement “enhanced safety measures” targeting online misogyny, signalling more prescriptive duties around content moderation and proactive detection.
- Italy and the UK are pursuing large-scale enforcement actions against platforms over children’s exposure to harmful content (e.g. TikTok is facing mass child-safety claims in Italy and continued ICO scrutiny in the UK).
- Regulators are increasing pressure on AI companies to implement child-safety safeguards, treating AI assistants similarly to online platforms and requiring proactive risk-mitigation measures.
- Australia is moving to require chatbot and AI-powered services to disclose child-safety measures, expanding online-safety duties to AI assistants.
- Across Southeast Asia, regulators are shifting the burden from user responsibility to platform liability, requiring robust trust & safety systems and more granular reporting.
- Minimum age requirements, user verification, and mechanisms to remove harmful content are emerging regulatory expectations for online platforms across MENA.
What does this mean for businesses?
Tech counsel should map safety obligations across jurisdictions, reassess moderation workflows, review age-assurance tools, and strengthen algorithmic transparency documentation. Prepare for more active supervision, higher reporting expectations, tougher fines for non-compliance and cross-market convergence around child safety.
AI regulation: centralised enforcement, lighter training rules, new compliance risks
AI governance is being overhauled globally. The EU’s AI Act remains the anchor regime, but revisions now under discussion could centralise enforcement, reduce some burdens for developers, and reshape reporting obligations.
Lexology’s coverage shows:
- The EU proposes streamlining obligations for foundation-model developers, including more flexible training-data documentation requirements.
- At the same time, enforcement may become more centralised, meaning more coordinated investigations and fewer national variations.
- Globally, regulators are shifting from principle-based to risk-based AI audits, pushing for transparency around model behaviour, data provenance, and safety controls.
- Companies developing conversational agents and voice-based systems face heightened scrutiny due to biometric and child-safety risks, as well as mental health risks.
What does this mean for businesses?
Legal teams should refresh AI governance frameworks, document model-training inputs, formalise risk assessments, and update vendor and partner contracts to require algorithmic transparency. Monitor AI Act adjustments closely, as early alignment will be key.
Biometric and voice-data litigation: BIPA-style risk spreads worldwide
Biometric data is becoming a frontline litigation and enforcement risk for tech companies, and the trend is global. Several EU data protection authorities, Hong Kong regulators and ASEAN digital-safety agencies are targeting biometric-enabled products.
Lexology’s coverage shows:
- A US federal court certified a Biometric Information Privacy Act (BIPA) class action against Amazon over Alexa voiceprints, confirming that voice-capture technologies fall squarely within biometric-data laws.
- There is intensifying action against companies using facial recognition, voice assistants, and emotion-analysis tools.
- Regulators increasingly treat biometric capture as high-risk processing, especially when used on children, for behavioural advertising, or in chat/AI interfaces.
What does this mean for businesses?
Audit all biometric and voice-processing uses. Update privacy notices, purge unnecessary retention, and incorporate explicit consent and deletion rights. Embed biometric-specific DPIAs for product teams building voice or image tools.
Cybersecurity: national security, resilience and incident reporting drive new obligations
Tech companies face intensifying cybersecurity regulation across all major markets. For businesses whose core value often lies in data and platform trust, these trends present material operational and liability risks.
Lexology’s coverage shows:
- Cybersecurity laws in Hong Kong are pushing companies toward mandatory compliance programmes, threat-sharing, and tighter audit exposure.
- Authorities globally are identifying 2025-2026 cyber priorities, including AI-enabled attacks, supply-chain exposure, and critical-infrastructure dependencies.
- New frameworks require faster breach notification, enhanced penetration testing, and improved software supply chain assurance.
- Boards and senior management are increasingly expected to demonstrate cyber literacy and accountability.
What does this mean for businesses?
Reassess cyber-resilience programmes, stress-test supply-chain dependencies, formalise board-level oversight, and align incident response plans with multi-jurisdiction reporting timelines. Consider cross-functional tabletop exercises tied to ransomware and AI threats.
Competition & digital-markets enforcement: gatekeepers under unprecedented scrutiny
Digital-markets enforcement is entering a more aggressive phase, aimed squarely at dominant tech players and high-risk data practices. For in-house counsel, this marks a shift from traditional antitrust review to always-on digital compliance affecting design choices, self-preferencing, search ranking, and data combination.
Lexology’s coverage shows:
- The UK CMA issued its first-ever Strategic Market Status (SMS) designation, targeting Google’s search business, which opens the door to sweeping conduct obligations, interoperability requirements and potential fines.
- The EU’s General Court is being asked to annul record non-compliance fines against Apple and Meta, underscoring how high the stakes have become for DMA compliance.
- Courts are scrutinising pricing and market-power cases more aggressively, including claims that UK authorities failed to bring a “slam dunk” case against Google.
- Securities litigation is also rising: Snap’s recent US$65 million settlement signals growing investor claims tied to disclosure, growth metrics, and algorithmic risks.
What does this mean for businesses?
Conduct competition risk audits, map data-integration practices, review algorithmic decisioning for self-preferencing risk, and strengthen internal reporting lines to monitor DMA/Digital Markets-style obligations across markets.
Preparing for the next phase of tech regulation
In-house teams in tech face simultaneous upheavals in AI oversight, online safety, cybersecurity, biometrics, and competition enforcement. Enforcement across each area is tightening and each intersects with core aspects of how technology products are built, deployed, and governed.
The companies that act now, by updating AI documentation, auditing biometrics, reinforcing cyber controls, aligning content-moderation frameworks, and preparing for digital-competition obligations, will be best positioned to navigate the wave of regulatory and litigation risk inevitably coming their way.