The countdown for the application of the EU’s AI Act has begun, but uncertainties about its interpretation remain.

The landmark AI Act came into force today. While enforcement of the majority of its obligations will not begin until August 2026, the first wave of provisions will start to apply as early as six months from today.
EU lawmakers first proposed the act in 2021. Following a series of occasionally fraught discussions and agreements, the legislation received final approval from the European Council more than three years later. The final 144-page text sets out a risk-based framework for regulating the technology, imposing obligations on stakeholders across the AI distribution chain.
The first to apply are the prohibitions on AI systems presenting ‘unacceptable risks’. From 2 February 2025, organisations will have to cease certain AI practices, including those that deploy purposefully manipulative or deceptive techniques and those that carry out social scoring.
Norton Rose Fulbright partner Marcus Evans said stakeholders should prioritise the prohibitions, as fines for non-compliance can reach up to 7% of global revenue. Obligations on AI literacy also apply in six months, Evans added – “organisations need to ensure a sufficient level of AI literacy in their staff dealing with AI,” he said.
“This will also be important as a foundation for robust AI governance programmes and ensuring that organisations are equipped to comply with any obligations that apply to them.”
Rules for general-purpose AI models and requirements for member states to designate competent authorities will apply a year from today. Most of the act’s remaining provisions will then apply in two years, though its recitals encourage providers of high-risk AI systems to start complying with the regulation on a voluntary basis during the transitional period.
Experts told Lexology PRO that a lack of clarity in certain provisions may make compliance challenging for organisations.
“Lawmakers were – understandably – keen to get the AI Act over the line quickly, but this has left lots of unanswered questions,” Evans said. The sector’s rapid development while the regulation was being drafted had added urgency to the process, he said.
Evans noted that remaining uncertainties include the definition of an AI system and the question of when a system will be considered high-risk. “These are really important issues for organisations to understand which – if any – of the AI Act’s obligations apply to them.”
AI Act article 3(1)
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments
White & Case partner Tim Hickman also noted that the AI system definition “is almost impossible to interpret with any degree of precision” due to the use of open-ended and permissive terms including ‘may,’ ‘can,’ and ‘such as’.
“[I]t seems that there will be intense debate (and potentially litigation) before we have certainty on what constitutes an AI system,” he said.
Hickman further noted that there is ambiguity in the legislation’s extraterritorial application.
Article 2 sets out that the regulation applies to third-country providers and deployers if their AI system outputs are used in the EU; recital 22 notes that it applies if the output ‘is intended’ to be used in the EU.
“It is therefore unclear whether providers and deployers outside the EU are subject to the EU AI Act where they do not intend the use of their output in the EU,” Hickman said. “Once again, it appears that this lack of clarity will inevitably lead to business uncertainty and potentially litigation.”
The experts further noted that organisations must monitor the overlap between the requirements of the AI Act and other EU-level or national legislation.
“Other rules must also be considered to assess the compliance when relying on AI technology,” said Linklaters partner Tanguy Van Overstraeten. “These include intellectual property, data protection or human rights.”
Last week, the EU AI Office launched a consultation on its first code of practice which will deal with general-purpose AI models under the act. The office, established by the AI Act, is responsible for ensuring consistent enforcement of the law by the member states, establishing codes of practice for the industry, and promoting AI development and innovation.