AI compliance countdown: key global deadlines to prepare for in 2026

Updated as of: 18 September 2025

To comply with the growing number of AI regulations globally, legal teams need to maintain risk management programmes, identify high-risk AI systems and take proactive steps to prevent algorithmic discrimination.


As AI evolves rapidly, so too do the legal frameworks that govern the technology. The EU AI Act 2024 – the majority of which will fully enter into force next year – remains the most far-reaching and comprehensive AI legislation globally. 

However, China and South Korea have also adopted AI regulatory frameworks, and four other jurisdictions have proposed legislation. At the federal level, the US is taking a light-touch regulatory approach, with state legislators leading the way in enacting more focused AI legislation that targets specific concerns, such as AI in employment and healthcare settings. 

This has resulted in a global regulatory landscape that is fragmented and rapidly evolving, presenting challenges for companies that develop or deploy AI across multiple regions. To maintain compliance, it will be vital that businesses pay close attention to legislative developments, while ensuring their models and tools comply with established laws governing privacy, copyright and employment, among other AI-related areas.

Lexology PRO sets out the key AI compliance deadlines approaching in 2026 and practical steps for compliance teams. 

Key compliance deadlines

US

1 January 2026

TAIGA prohibits companies operating in Texas from developing or deploying AI systems for certain high-risk purposes, such as behavioural manipulation, unlawful discrimination and social scoring. 

Companies that develop or deploy AI systems in Texas should audit their models to ensure they do not fall within the law's prohibitions and update their internal processes to reflect its provisions.

CAITA requires developers of generative AI (genAI) models to publicly disclose on their websites information about the data used to train their systems, including the number of data points, the types of data and whether the training data includes personal information. 

The disclosure requirement applies to all genAI systems released on or after 1 January 2022 and made available to residents of California.

GenAI developers will need to establish processes for tracking and recording the training data they use, to ensure their disclosures are accurate and thorough. "Developers" are defined as those that design, code, produce or "substantially modify" a genAI system.

AB 1836 forbids the creation or distribution of digital replicas of deceased persons without explicit consent from their estate. For example, it would prohibit recreating the voice of a deceased singer to produce new recordings using their likeness. 

  • Amendments to the Illinois Human Rights Act regulating AI in employment (HB3733)

HB3733 amends the Illinois Human Rights Act to govern how employers may use AI in employment decisions. Employers are expressly prohibited from using AI in a manner that results in discriminatory outcomes. What’s more, employers must disclose to workers when AI is being used for recruitment, promotion, renewal of employment and disciplinary decisions, among other purposes. 

Employers will need to establish which of their HR platforms and processes incorporate AI, implement measures to prevent discrimination and draft notices to inform employees about AI use where required. 

30 June 2026

CAIA applies to any person doing business in Colorado who develops an “AI system” or deploys a “high-risk AI system.” The Act establishes liability for both developers and deployers of AI if their systems produce discriminatory outcomes, even without intent. 

It introduces new requirements for deployers, including carrying out an impact assessment before a high-risk system is first used and repeating assessments at least annually. Companies must also notify consumers when they have deployed a high-risk AI system in decision-making, or when AI is a substantial factor in a decision. 

CAIA provides “safe harbour” protections for those businesses that comply, which should incentivise in-scope companies to maintain risk management programmes, perform regular impact assessments and document their AI risk-management measures. 

APAC 

22 January 2026

South Korea's AIBA establishes obligations for developers and users of high-impact and genAI systems, with "users" referring to entities that offer AI products or services to the public.

For example, businesses offering high-impact AI or genAI must disclose when a product uses AI and clearly label AI-generated outputs.

Additional obligations for businesses offering high-impact AI services or products include developing and implementing a risk management plan and maintaining documentation on safety and reliability measures, among other requirements.

To prepare for compliance, companies should conduct an audit to establish which of their AI systems fall under the regulated categories, implement the necessary risk-management procedures and collate the necessary documentation. 

Companies without a registered business address in South Korea, but that exceed certain user or sales thresholds, will need to designate a domestic representative. 

10 December 2026

  • Transparency requirements for automated decision-making take effect under Australia’s Privacy Act

Amendments to Australia’s Privacy Act 1988 include new transparency requirements around automated decision-making (ADM). Organisations that carry out ADM must disclose in their privacy policies the types of personal information being processed and the kinds of decisions made solely by ADM systems. 

EU

2 August 2026

  • EU AI Act 2024 provisions related to high-risk AI systems

The EU AI Act 2024 imposes strict requirements on providers and deployers of high-risk AI systems, including risk management, data quality, transparency, human oversight and accuracy obligations.

The definition of high-risk systems is broad, encompassing AI models used for recruitment, as safety components in critical infrastructure and for biometric identification, among numerous other uses. 

Obligations related to these systems include mandatory fundamental rights impact assessments, conformity assessments, data governance, as well as transparency requirements. Companies must identify foreseeable risks that high-risk AI systems could pose to the health, safety, or fundamental rights of users and adopt targeted risk management measures.

Direction of travel

Across the world, AI regulation is intended to minimise risk, ensuring safe, ethical and transparent AI development that does not compromise the fundamental rights and freedoms of individuals. 

The jurisdictions that have enacted comprehensive legislation have broadly opted for a risk-based approach, imposing the strictest requirements on AI models deemed to pose the greatest potential for harm.

This also explains the uptick in more targeted regulations to govern AI applications that have a significant impact on individuals’ rights and access to healthcare or employment.

While many jurisdictions, including the UK, appear to have opted for a lighter-touch approach to AI regulation, globally, the number of new laws seems set to increase as AI becomes increasingly integrated into society. 

Canada is advancing the AI and Data Act; Brazil is developing a comprehensive AI Bill; and Japan is preparing the Basic Law for the Promotion of Responsible AI, to name a few of the comprehensive AI regulations currently in development. 

Next steps

Broadly, companies that develop and deploy AI systems will need to audit which of their products, services or tools could be considered “high risk” under the various laws, and maintain a firm understanding of what those risks are, particularly if they could interfere with fundamental rights and freedoms or lead to discrimination. It may also be necessary to document the data used for training models and provide transparent disclosures to users about when AI is being used and for what purpose. 

See Lexology PRO’s interactive Compliance Calendar for key upcoming deadlines and dates in core compliance areas throughout 2025, including enforcement dates, reporting deadlines and changes to regulations.   

Track the latest data protection updates from authorities around the world using Scanner, Lexology PRO’s automated regulatory monitoring tool. 

Stay up to date with key developments and in-depth articles by following Lexology’s AI hub.