The long-awaited EU Artificial Intelligence Act has been adopted. Lexology PRO identifies the key provisions, including AI literacy requirements from February 2025 and mandatory technical requirements for high-risk AI systems.

The world’s first comprehensive artificial intelligence (AI) law, the EU AI Act 2024 (the Act), was published in the EU’s Official Journal on 12 July 2024 and will take effect 20 days after publication, on 1 August 2024. The legislation establishes a risk-based framework for regulating AI that imposes incrementally more stringent obligations on AI applications across four risk levels: unacceptable risk, high risk, limited risk and minimal risk.
The Act will lay the groundwork for similar legislation in other jurisdictions across the globe – for example, according to the vice president for global privacy at the think tank Future of Privacy Forum, some US states were “inspired” by it when drafting their own rules. However, the Act has faced several criticisms, including that its most stringent provisions apply only to a narrow subset of organisations, and that it contains exemptions for remote biometric identification (RBI) use by law enforcement.
The Act will apply in its entirety from 2 August 2026, but some provisions will be implemented in phases, with various compliance deadlines between now and then — and some even later. The extraterritorial nature of the Act means that entities established outside the EU that are offering AI systems or services within the EU must also comply with its provisions.
Lexology PRO outlines the main provisions of the AI Act below and provides some compliance tips for businesses.
Risk-based approach
The Act introduces a risk-based approach to AI systems, which identifies four different risk profiles. Different compliance standards apply to each of the risk profiles. Chapters I and II of the Act, setting out the general provisions of the legislation and the list of banned AI systems, will come into effect on 2 February 2025.
Unacceptable risk
The Act bans certain applications, which it classifies as posing “unacceptable risks,” including:
- biometric categorisation systems, such as RBI systems, that use sensitive characteristics (e.g. political, religious, philosophical beliefs, race, sexual orientation);
- facial recognition databases created with untargeted scraping of facial images from the internet or CCTV footage (such as that highlighted in the string of regulatory and litigation action against Clearview AI’s alleged data scraping);
- emotion recognition in schools and in the workplace;
- “social scoring” (using metrics to rate individuals based on their social behaviour or personal characteristics);
- AI systems that manipulate human behaviour; and
- AI used to exploit people’s vulnerabilities.
The Act includes some narrow exceptions for the use of RBI in publicly accessible places for law enforcement purposes, including in the case of threat of a terrorist attack or to track down suspects of particular offences, which some MEPs have criticised as intrusive.
The Act does not place an outright ban on AI emotion recognition, a technology that has been a source of concern for AI researchers due to its “shaky foundations.” The Act bans the use of emotion recognition technology only in education and the workplace. During negotiations, the EU Council was not willing to accept the ban on emotion recognition use relating to migration control, which has been criticised in the media as a human rights infringement.
Additionally, the Act does not ban deepfakes, a form of AI that is currently a key concern for businesses. Instead, the Act requires transparency and clear disclosures from deepfake creators, obligations that will apply from 2 August 2026. The fact that deepfakes have not been banned or criminalised has been criticised by the media, with some outlets saying that transparency alone might not be enough to address the malicious potential of deepfakes.
High risk
The Act sets out clear obligations for AI systems classified as “AI systems that create a high risk to the health and safety or fundamental rights of natural persons.” Examples of high-risk systems under the Act include:
- recruitment technologies;
- medical devices;
- biometric identification; and
- AI systems used to influence the outcome of elections or voter behaviour.
Compliance obligations for high-risk systems include:
- mandatory fundamental rights impact assessments (FRIA);
- conformity assessments;
- data governance requirements, including documenting the origin of the data and the purpose of the data collection, and annotating, labelling, testing, and updating datasets;
- registration in an EU database;
- risk management and quality management systems, which would involve identifying foreseeable risks that the system could pose to health, safety, or fundamental rights of users, post-market data monitoring, and the adoption of targeted risk management measures;
- transparency requirements, including that the system must be accompanied by instructions for use that set out its capabilities, limitations and intended purposes; and
- human oversight.
A subset of high-risk AI systems, set out under Article 6(1), has been given the most generous compliance deadline, with 36 months after entry into force (until 2 August 2027) allowed for them to meet their obligations.
Conformity assessment
The conformity assessment regime for high-risk systems must demonstrate that the system conforms to the requirements set out in Chapter III, Section 2 of the Act, including having a risk management system and processes, data governance structures, record-keeping systems, and human oversight. This involves ensuring that established quality management and post-market monitoring systems align with those provisions, and thoroughly reviewing technical documentation to ensure it meets the requirements.
High-risk systems must be subject to a conformity assessment before being placed on the market or being put into service. According to recital 125 of the Act, the assessments should, at least initially, be carried out by the AI provider until third-party conformity assessments are developed to a high standard.
FRIAs
The obligation to carry out FRIAs applies to deployers of high-risk AI systems that are public bodies, some private operators providing public services, and operators deploying certain other high-risk systems, including banking or insurance entities. The aim of the FRIA is for the deployer to identify the individuals or groups likely to be affected by the system, the risk management measures to be applied, and the foreseeable environmental impact of the system. The FRIAs have been criticised by tech experts because the obligation applies only to public sector bodies and a narrow subset of private bodies, leaving many unprotected.
Additionally, it is not yet clear if the FRIAs will be more than a tick-box exercise, as the Act does not make the deployment of a high-risk AI system conditional on the FRIA being reviewed.
Limited risk
AI systems falling into the “limited risk” category include deepfakes, chatbots and general purpose AI systems (GPAI), which include large language models (LLMs). Their compliance obligations focus on transparency. For example, to increase transparency on the data used to train GPAI models, including text and data protected by copyright law, providers will need to create and publish a detailed summary of the content used for training the model.
GPAIs with “systemic risk”
While some GPAI models are considered to be of “limited risk”, some can be classified as having a “systemic risk” if they disrupt critical sectors, have serious consequences for public health and safety, impact democracy or public security, or disseminate discriminatory content.
GPAI models that present a “systemic risk” should be continuously assessed and their risks mitigated, for example by putting in place risk-management policies including accountability and governance processes, implementing post-market monitoring, taking appropriate measures along the model’s entire lifecycle, and cooperating with relevant actors along the AI value chain.
The Act outlines that providers of GPAI models that pose a “systemic risk” should ensure cybersecurity protection for the model and its physical infrastructure along its entire lifecycle, through operational security measures for information security, specific cybersecurity policies, adequate technical and established solutions, and cyber and physical access controls.
Under the Act, obligations for general-purpose AI models, detailed in Chapter V of the Act (Articles 51 to 56), are enforceable 12 months after the Act enters into force, in August 2025.
In the interim, Article 56 of the Act outlines the Codes of Practice (CoP) as a placeholder mode of compliance to bridge the gap between GPAI model provider obligations coming into effect in August 2025 and the adoption of harmonised standards, which may take three years or more. While not legally binding, compliance by GPAI model providers with the measures set out in the CoP will serve as a presumption of conformity with GPAI model provider obligations until the standards come into effect. These obligations include:
- providing technical documentation to the AI Office and national competent authorities;
- providing relevant information to downstream providers that seek to integrate the model into their AI or GPAI system (eg, capabilities and limitations);
- publishing summaries of the training data used; and
- putting in place policies for complying with existing EU copyright law.
For GPAI models with systemic risk (models trained using a cumulative amount of computation greater than 10^25 floating point operations), further obligations include state-of-the-art model evaluations, risk assessment and mitigation, serious incident reporting (including corrective measures), and adequate cybersecurity protection.
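To make the compute threshold concrete, the short sketch below illustrates the “greater than 10^25 floating point operations” presumption described above. It is illustrative only; the variable names and the example figures are hypothetical and not taken from the Act.

```python
# Illustrative sketch: the compute-threshold presumption for "systemic risk"
# GPAI models. Names and example figures are hypothetical.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute, in FLOPs


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if cumulative training compute exceeds the 10^25 FLOP threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# A hypothetical model trained with 3 x 10^25 FLOPs meets the presumption;
# one trained with 8 x 10^24 FLOPs does not.
print(presumed_systemic_risk(3e25))  # True
print(presumed_systemic_risk(8e24))  # False
```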
Providers that fail to comply with the Codes of Practice will have to prove compliance with the above obligations to the Commission by alternative means, which is likely to be more burdensome and time-consuming. The CoP should be ready by 2 May 2025, according to the regulation amending the Act published on 12 June 2024.
Minimal or no risk
AI systems that are not classified as banned, “high risk” or “limited risk” are not subject to specific obligations under the Act. The focus of the AI Act is on the high-risk and limited-risk categories.
Governance
Under the Act, AI systems will be supervised by national competent authorities, which will form a European Artificial Intelligence Board to ensure consistent application of the law. An AI Office has been launched within the European Commission to drive the implementation of the Act.
Non-compliance with the prohibition of AI systems that pose an “unacceptable risk” could result in fines of up to €35 million (just under US$38 million) or up to 7% of a company’s total worldwide annual turnover for the preceding financial year, whichever is higher.
Companies that provide misleading, incorrect, or incomplete information about their AI systems could face fines of up to €7.5 million (just over US$8 million) or 1% of their total worldwide annual turnover. Chapters VII and XII, which set out governance obligations and penalties under the Act respectively, will apply from 2 August 2025.
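The sketch below shows how the “whichever is higher” logic combines a fixed ceiling with a percentage of worldwide annual turnover. The turnover figure is hypothetical and the function is illustrative, not a compliance calculation.

```python
# Illustrative sketch: combining a fixed fine cap with a turnover-based cap,
# taking whichever is higher. The turnover figure below is hypothetical.

def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the applicable maximum fine: the greater of the fixed cap or the turnover share."""
    return max(fixed_cap_eur, turnover_eur * turnover_pct)


# Prohibited-practice breach for a company with EUR 2bn worldwide annual turnover:
# 7% of turnover (EUR 140m) exceeds the EUR 35m cap, so the higher figure applies.
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0

# Misleading-information breach: the EUR 7.5m cap against 1% of turnover (EUR 20m).
print(max_fine(2_000_000_000, 7_500_000, 0.01))   # 20000000.0
```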
AI literacy
Providers and deployers of AI systems will also have to ensure a sufficient level of AI literacy among their staff and anyone operating or using AI systems on their behalf from 2 February 2025. This includes ensuring staff have the appropriate level of skills, knowledge and understanding to deploy AI systems in an informed manner and to be aware of the opportunities and risks of AI.
The regulation amending the Act, published on 12 June 2024, states that the European Artificial Intelligence Board should support the Commission in promoting AI literacy tools, and that the European Commission and the Member States should facilitate the drawing up of voluntary codes of conduct to advance AI literacy.
Compliance steps
Mandatory technical requirements
Companies that have products or services that are classified as “high risk” under the Act must prepare for the incoming mandatory technical requirements, which will include:
- creating a risk management system;
- increased data governance to ensure that datasets are relevant, representative, and accurate;
- record-keeping requirements, including recording the period of use of the system, the identification of the natural persons involved in verifying the results, and the reference database and input data used, as set out in Chapter III, Article 12 (a minimal log-record sketch follows this list); and
- transparency requirements, including that the system must be accompanied by instructions for use that set out its capabilities, limitations, intended purposes, and more, as set out in Chapter III, Article 13.
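As referenced in the record-keeping item above, the following is a minimal sketch, not a compliance tool, of one way to structure the automatically logged events the Act contemplates for certain high-risk systems (period of use, reference database, input data, persons verifying results). All field names are assumptions introduced here for illustration.

```python
# Minimal sketch of a log record covering the event types listed above.
# Field names are hypothetical, not taken from the Act.

from dataclasses import dataclass
from datetime import datetime


@dataclass
class HighRiskSystemLogEntry:
    session_start: datetime       # start of the period of each use
    session_end: datetime         # end of the period of each use
    reference_database: str       # database against which input data was checked
    input_data_reference: str     # pointer to the input data used for the query
    verifying_persons: list[str]  # natural persons who verified the results


entry = HighRiskSystemLogEntry(
    session_start=datetime(2025, 3, 1, 9, 0),
    session_end=datetime(2025, 3, 1, 9, 15),
    reference_database="watchlist-v3",            # hypothetical identifier
    input_data_reference="query-2025-03-01-001",  # hypothetical identifier
    verifying_persons=["J. Doe"],
)
print(entry)
```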
Continuous oversight and risk management
The Act emphasises that continuous human oversight, risk assessment and risk management are key to developing and deploying AI systems. It states that human oversight processes must be built into the design of high-risk AI systems, and such systems must be overseen throughout their whole lifecycle by professionals who have "the necessary competence, training and authority to carry out that role."
For GPAIs, the Act states that providers should "continuously assess and mitigate systemic risks, including for example putting in place risk-management policies" that include accountability processes.
Create internal AI ethics policies
Companies should create robust internal AI ethics policies to ensure compliance with the Act’s emphasis on the ethical deployment of AI systems. Many companies, such as Google, Microsoft, BMW, and Deutsche Telekom, have developed formal AI policies with commitments to safety, fairness, diversity, and privacy. Google’s policy includes building and testing a model for safety before use, and both Microsoft’s and BMW’s policies feature transparency and accountability requirements, including the publication of transparency reports.
Some companies, like the US Federal Home Loan Mortgage Corporation (Freddie Mac), have appointed chief ethics officers to oversee the introduction and enforcement of such policies, in many cases supporting them with ethics governance boards. Additionally, the number of companies with a designated head of AI position has almost tripled globally in the past five years, according to LinkedIn data, and in March 2024, the US White House announced that federal agencies are required to designate chief AI officers “to ensure accountability, leadership and oversight” of the technology.
Organisations can also consider introducing a generative AI policy to train their employees on the emerging risks relating to large language models; such a policy can include risk rating, recordkeeping and content labelling requirements.
Prepare for obligations in other jurisdictions
As noted above, the Act is the first of its kind and will lay the groundwork for similar legislation in other jurisdictions across the globe. Companies should build AI models and governance structures with the cognisance that future legislation is likely to include accountability and transparency requirements to protect consumers, as featured heavily in the Act.