AI regulation, recalibrated: What the EU’s latest move means 

The EU has taken a significant step toward reshaping its AI regulatory landscape. The European Parliament’s LIBE and IMCO committees have approved their negotiating position on the AI Omnibus, a package focused on simplifying the rules, reducing regulatory friction, and giving businesses more clarity.

This is a strategic recalibration of the AI Act that could shape how innovation and compliance coexist in Europe for years to come.

From complexity to clarity

The AI Act, in its original form, was ambitious and complex. Businesses raised concerns about overlapping requirements, unclear timelines, and the practical challenges of compliance, particularly for high-risk AI systems.

The AI Omnibus, part of the European Commission’s wider Digital Omnibus simplification package, aims to fix that.

At its core, the proposal focuses on:

  • simplifying regulatory requirements
  • reducing duplication with sector-specific laws
  • introducing clearer and more realistic timelines
  • providing legal certainty for companies developing AI-enabled products

One of the most important changes is the “stop-the-clock” mechanism, which delays key obligations under the AI Act. This creates what policymakers are calling “breathing space” for companies to prepare properly.

The key changes businesses should know

Delayed deadlines for high-risk AI

The most immediate and significant change is the extension of compliance timelines:

  • High-risk AI systems:
    → December 2027 or August 2028 (depending on category)

This reflects a practical reality: many of the technical standards and guidelines required for compliance are not yet ready.

Businesses now have more time to build compliant systems, but they should not slow down preparation.

Sectoral rules take priority

A major win for industry is the move to avoid overlapping regulations. If your AI system is already governed by sector-specific legislation, such as medical device or product safety rules, those rules will take precedence over the AI Act in certain areas.

This could mean reduced compliance duplication, lower legal costs, and clearer regulatory pathways. That is especially important for companies operating in highly regulated sectors such as healthcare, manufacturing, and financial services.

More time for generative AI labelling

The Omnibus also addresses one of the most uncertain areas: the labelling and watermarking of AI-generated content.

  • new proposed deadline: November 2026
  • industry is pushing for even longer (up to 12 months grace beyond that)

This means companies using generative AI for content, marketing, or customer interaction, among other uses, get more time. But uncertainty remains around how exactly these requirements will be enforced.

New ban on “nudifier” AI tools

The proposal introduces a clear prohibition on AI systems that generate non-consensual intimate images of real people.

This means stronger guardrails around harmful AI use and increased accountability for developers and platforms. It signals that while the EU is easing compliance burdens, it is not stepping back from high-risk and harmful use cases.

Support for growing companies

Recognising the challenges faced by scaling businesses, the proposal extends certain support measures to small mid-cap enterprises (SMCs): companies that have outgrown SME status but still face resource constraints.

This could mean a smoother transition for growing companies navigating AI compliance.

Why this matters

The AI Omnibus is not rolling back regulation; it is rebalancing it. For businesses, this creates a more workable environment but also a longer period of uncertainty while negotiations continue.

Essentially, the AI Omnibus creates a more favourable landscape for businesses, though not without trade-offs. Companies gain time to prepare for compliance, easing the immediate pressure that many had feared under the original timeline. The effort to reduce regulatory overlap, particularly by prioritising sector-specific legislation, also promises a more streamlined and cost-effective compliance process. At the same time, clearer guidance on how AI systems will be classified, especially in terms of risk, should help organisations plan with greater confidence.

But this breathing space comes with a degree of uncertainty. Until the final rules are agreed, businesses must navigate a shifting legal landscape in which key details are still being negotiated. There is also the possibility of divergence between EU institutions during the trilogue process, which could further complicate the picture. And while deadlines for certain requirements, such as those affecting generative AI, have been pushed back, clarity on how these rules will ultimately work remains limited.

What happens next?

The process is moving quickly. The next phase begins with a plenary vote in the European Parliament, expected on March 26. If approved, this will open the door to trilogue negotiations between the Parliament, the Council, and the European Commission. The aim is to reach a final agreement by spring 2026, allowing the revised framework to move forward without delay. If negotiations progress swiftly, the proposed “stop-the-clock” mechanism could be implemented before the original AI Act deadlines take effect, formally introducing the extended timelines and giving businesses the additional time they need to prepare for compliance.

This timeline is critical. If delays occur, businesses could still face the original AI Act deadlines.

What should businesses do now?

Despite the delay, this is not the time to pause AI compliance efforts. Instead, companies should use this window strategically.

Continue AI risk mapping

Identify whether your systems fall into high-risk categories, especially in:

  • HR and employment
  • education
  • financial services
  • critical infrastructure

Monitor regulatory developments closely

The details are still evolving, especially around:

  • generative AI labelling
  • technical standards
  • AI Office guidance

Align with sector-specific regulations

If your product is already regulated under another framework, such as medical device legislation, start aligning AI compliance within that framework.

Build governance now

Use the extra time to:

  • implement AI policies
  • strengthen documentation
  • develop internal audit and monitoring processes

The timeline may have shifted, but expectations haven’t disappeared. Companies that treat this as a delay rather than an opportunity risk falling behind.

Smart regulation or missed opportunity?

The AI Omnibus signals a more pragmatic EU approach, one that acknowledges the realities of implementation while maintaining strong safeguards.

If executed well, it could boost innovation, reduce compliance friction, and strengthen Europe’s position in the global AI race. But success depends on clear and consistent agreement between EU institutions. For now, businesses should treat this moment as a rare regulatory pause: not a chance to relax, but an opportunity to prepare smarter.
