The Digital Omnibus isn’t just about buying time. Is Parliament rewriting the AI Act’s rules?

For months, the European Commission’s Digital Omnibus proposal was framed as a pragmatic reset: a way to give businesses breathing room while standards, guidance and enforcement structures caught up with the ambition of the EU AI Act.

That framing is now looking very different.

The European Parliament’s Committee on Legal Affairs (JURI) has published its draft opinion on the Omnibus and, as Oliver Belitz has pointed out in his recent analysis of the JURI text, Parliament is signalling that legal certainty matters more than flexibility.

The Digital Omnibus is becoming a battleground over the substance of the AI Act itself: its timelines, its scope, and its underlying philosophy. For businesses already struggling with AI Act readiness, this shift has real consequences.

From “floating compliance” to fixed dates: Parliament draws a line

One of the most consequential changes flagged by JURI is its rejection of the Commission’s proposed “floating” application dates for high-risk AI obligations.

The Commission wanted to link compliance deadlines to the future availability of harmonised standards, with long-stop dates as a safety net. In theory, this was meant to avoid forcing companies to comply with requirements before they knew what compliance looked like.

In practice, as Belitz warned, it risked creating a legal trap: obligations triggered by a moving target. JURI agrees. Instead, Parliament is pushing for hard, immovable deadlines:

  • 2 December 2027 for high-risk AI systems under Annex III. These are purpose-based systems, such as those used for recruitment, creditworthiness assessment and scoring.
  • 2 August 2028 for high-risk AI safety components under Annex I. These are product-based systems, such as machinery, vehicles and medical devices.

What this means for businesses

  • The era of “wait and see” is ending. Even if standards lag, the compliance clock is now ticking.
  • Providers of high-risk systems should plan on the assumption that the rules will apply on these dates regardless of guidance delays.
  • Compliance strategies based solely on harmonised standards will be risky. Organisations will need defensible, principles-based compliance positions well before 2027.

What this means for the AI Act

Parliament is making a political choice in favour of clear, fixed rules over regulatory flexibility, pushing back on the Commission and setting up a more difficult trilogue.

Closing the “deepfake gap”: Parliament moves where the Commission didn’t

JURI’s draft opinion also addresses an omission that many civil society groups have been highlighting for months: non-consensual sexualised deepfakes.

While the Commission’s Omnibus proposal stayed silent on this issue, JURI proposes adding a new explicit prohibited practice. According to JURI, the Commission missed its own deadline to assess this risk and Parliament is now stepping in directly to protect fundamental rights.

This move reinforces that Parliament is willing to expand prohibited practices even at this late stage. Expect more scrutiny of generative AI use cases during trilogue negotiations.

AI literacy: Parliament will not let this become optional

One of the most controversial Omnibus proposals was the Commission’s attempt to water down the AI literacy obligation, shifting it away from providers and deployers and leaving Member States merely to “encourage” it. JURI has deleted that proposal entirely.

Parliament insists on retaining a direct, enforceable duty on organisations. This aligns closely with concerns raised by the EDPB and EDPS, who warned that removing employer responsibility would turn AI literacy into a vague aspiration rather than a concrete compliance requirement.

Regulatory fatigue meets regulatory reality

Many organisations are running out of regulatory energy. Years of overlapping digital regulation, from GDPR to the DSA and DMA, and now the AI Act have created real compliance fatigue. That strain has been made worse by delayed technical standards, uncertainty around how the AI Act interacts with existing data protection law, and repeated shifts in implementation timelines.

The Digital Omnibus is the Commission’s attempt to respond to that reality. It seeks to reduce pressure on organisations by centralising enforcement for certain AI systems in the new EU AI Office, easing registration requirements for some lower-risk uses, extending simplified compliance routes to SMEs, and making limited adjustments to how high-risk AI systems can be tested in real-world conditions.

But Parliament’s response shows that simplification has clear limits. Where issues of fundamental rights, legal certainty or democratic oversight are at stake, flexibility gives way to firmness.

This position is shaped by a growing credibility gap. The Commission failed to deliver guidance on how to identify high-risk systems under Article 6 on time. Standards bodies missed their own development timelines. National authorities are still gearing up for enforcement. And yet, businesses are expected to prepare for AI Act compliance as early as August 2026.

That tension explains the Digital Omnibus but it also fuels scepticism. As one AI Act negotiator has warned, delaying enforcement without certainty that new rules will pass risks deepening uncertainty rather than resolving it.

Where does this leave businesses now?

Three practical conclusions stand out:

Assume the AI Act will land hard. Parliament is resisting dilution. Compliance strategies should be built on the assumption that obligations will apply even if guidance lags.

Plan for dates, not documents. Harmonised standards are helpful, but fixed deadlines are now the real anchor. Waiting for clarity is no longer viable.

Treat governance, literacy and risk assessment as core infrastructure. These are not negotiable elements of compliance and they are unlikely to be relaxed in trilogue.

Rather than simplifying AI regulation, the Digital Omnibus has brought underlying tensions to the surface. JURI is signalling that the AI Act should be treated as a core legal framework that protects fundamental rights and demands clear, enforceable obligations, not adjustable timelines.


“When Data Thinks: The Intersection of GDPR and AI” explores the critical role of data quality in ensuring effective compliance. It provides insights into how organisations can enhance data trust, improve decision-making, and optimise compliance processes by addressing data integrity, consistency, and accuracy. Get the guide here.