Legal teams are under rising scrutiny to use AI responsibly. At Lexology Live Asia 2025, experts shared practical guidance on governance, risk controls, and avoiding AI washing.

Moderated by Sophia Yap, chief legal officer at Hong Kong Broadband Network, the panel brought together leading legal experts and general counsel from prominent companies and technology ventures. It explored how legal teams can harness AI responsibly while managing regulatory and operational risks.

Human oversight is non-negotiable

AI can accelerate research and drafting, but it cannot replace professional judgment. Timothy Lam, partner at Long An & Lam, pointed to recent cases in which lawyers faced sanctions for citing fabricated precedents generated by AI tools. “Courts are starting to issue practice directions requiring disclosure of AI use,” he noted, referencing rules in Australia that require lawyers to confirm accuracy when AI assists in submissions. Yap reinforced the point: “You can’t sue the AI, right?”

Athena Ang, head of legal at the Hong Kong Technology Venture Company, framed the issue succinctly: “The real problem isn’t how often we use AI; it’s how responsibly we use it.” Her advice was to apply established principles of quality assurance and accountability. Every AI-generated output must be verified, just as lawyers validate any other source.

Governance and risk controls are paramount

Governance is central to any AI compliance framework. Toys “R” Us Asia GC Adam Au stressed that risk assessments must precede deployment: “Impact assessment is always key because it forces you to verbalise the risks and mitigation strategies.” He warned against the misconception that obligations end once a tool is implemented: “It’s the beginning of your sleepless nights.”

Ang emphasised the importance of knowing the capabilities, limitations, and vulnerabilities of AI. Failure to do so can lead to regulatory exposure and reputational harm. She cited enforcement actions, including the UK Advertising Standards Authority’s ruling against misleading AI claims and the SEC’s penalties for false disclosures. Her advice: “Treat AI like KYC. Know your AI before you trust it.”

Raymond Goh, general counsel of Quantum Intelligence, added a technical perspective, advocating for strong encryption, layered monitoring, and human-led escalation protocols. “There’s a lot of pride in saying systems run without humans. From a risk perspective, that’s not good,” he cautioned.

Internal governance alone, however, is not enough. Legal teams must strengthen in-house capability and insist on transparency from vendors. “Any advisor we bring in has to understand scalability. Templates must be drafted so they can be automated,” Low advised in a related discussion.

AI washing must always be addressed

The panel highlighted the growing risk of AI washing, the practice of exaggerating or misrepresenting AI capabilities. Ang defined it as “making false or misleading claims about AI use,” citing examples such as Builder.ai, which collapsed after revelations that its flagship AI solution, “Natasha”, marketed as a no-code AI platform, was in fact a front for 700 human engineers manually producing code in India. Amazon also faced scrutiny when it emerged that its “Just Walk Out” technology relied on manual intervention for most transactions.

These cases underline the need for rigorous due diligence. Goh advised drilling down into technical explanations and IP protections. Legal teams should demand transparency and avoid being swayed by marketing narratives.

People must be at the heart of managing AI risks

Managing emerging risks such as AI washing requires vigilance, which the panel linked to building resilient compliance frameworks underpinned by investment in people. Yap urged lawyers to “keep learning,” while Lam reminded the audience that “AI can’t give you the nuanced tactics that top lawyers provide. That’s where human value lies.” Emotional intelligence, adaptability, and technical awareness are now more important than legal interpretation alone, a point echoed in another session at the event.

Stay up to date with key developments and in-depth articles by following Lexology’s AI hub.

See Lexology PRO’s interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas, including effective dates, reporting deadlines and changes to regulations.