AI Act rethink could centralise enforcement but relax developers’ training compliance burden

Updated as of: 13 November 2025

The European Commission may exempt small to medium-sized AI companies from some rules, allow sensitive data processing for certain training and centrally regulate large platforms.


The commission is expected to formally propose changes to the landmark AI Act as part of a Digital Omnibus package on 19 November, alongside GDPR amendments and other measures.

If cleared by the EU institutions, the amendments would exempt SMEs from documentation rules and reduce their potential exposure to fines. The commission also wants to grant companies a GDPR legal basis for sensitive data processing, assign enforcement and regulatory powers to the commission’s AI Office, and make AI literacy a recommendation instead of a requirement. The commission said these changes follow stakeholder consultation that “revealed implementation challenges” which could jeopardise the effective application of the law.

First introduced earlier this year, the Digital Omnibus was pitched as simplifying the EU’s digital legislation to boost European competitiveness, in response to the landmark Mario Draghi report.

“The most notable thing from the leak is the shift of thinking inside the EU,” Keystone partner James Tumbridge told Lexology PRO. “The EU loves to be the first to regulate, but regulation of a new fast-moving technology can stifle adoption and innovation.” 

The commission’s planned new legal basis for AI system providers to process special category data is limited to detecting and correcting bias, subject to safeguards such as strict access controls, record-keeping and a requirement to delete data once bias has been detected.

DLA Piper partner Gareth Stokes said the changes could “lead to better and more widespread compliance” by streamlining regulation. 

“We know that there is a real desire for clarity in relation to the EU's digital regulation. The issue for many organisations isn't regulation itself, but a lack of clarity and certainty in how regulation will be applied,” Stokes said. “Perhaps the biggest win of all will be making the suite of rules more consistent and easier to understand.”

The AI Act came into force in August 2024, but many of its provisions and technical standards will not apply until 2026 or later. The draft amendment suggests an additional pause that would give companies whose high-risk generative AI systems are already on the market before the August 2026 implementation date more time to comply with requirements for labelling AI-generated content.

The commission said this would give companies a reasonable time to adapt their practices while avoiding market disruption.  

“This will be an important change – and probably a needed one – given the delays on the commission’s side in publishing key technical standards, codes of practice and other documents that are expected by companies to properly implement the new rules,” said Clifford Chance partner Dessislava Savova. 

But Covington & Burling partner Daniel Cooper warned that exempting companies from transparency and content-labelling duties could attract scrutiny as users may not know when content is AI-generated. 

In an open letter sent to the commission on 11 November, privacy groups including noyb, European Digital Rights and the Irish Council for Civil Liberties raised concerns that the changes would weaken the commission's appetite for enforcement and allow AI providers too much leeway. 

The draft proposal aims to scale back AI literacy requirements, even though an obligation only took effect in February this year requiring AI system providers and deployers to ensure staff have sufficient literacy and training to use the systems safely.

“There has been quite a lot of effort from the commission to promote AI literacy but now under the proposal, this would no longer be a requirement but something the commission and member states would only recommend,” Cooper said. 

Cooper added that this “surprised” him, as the commission “seemed to put their heart into training people who are using the AI systems, but now see it as too much of an administrative burden.”

Currently, the AI Office helps implement and monitor the AI Act’s rules in co-ordination with national authorities, which handle most enforcement for AI systems in their countries. If the draft amendments pass, the office would gain centralised oversight, with new powers to directly regulate and enforce against AI systems deployed by Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), as defined in the Digital Services Act, including imposing fines. “It looks like the commission is really trying to assert itself with a degree of enforcement power,” Cooper said.

He noted that if the proposed amendments succeed in making the AI Act “actually perform as expected and be more fit for purpose”, they could strengthen the law in the long term despite relaxing certain parts.

“There’s plenty of opportunity for the commission to also supplement secondary legislation to build the law up in certain respects. The AI Act is probably the most important chapter of the book, but there’s other chapters that need to be written in terms of seeing the AI Act really come to life,” he said.