AI Act reforms stall as EU misses deal, with August deadline looming

After more than 12 hours of negotiations in Brussels, EU lawmakers have walked away without agreement on proposed changes to the landmark AI Act. What was expected to be a technical exercise has instead unravelled into a political and regulatory standoff, one that leaves businesses facing a strange reality: the clock is still ticking, and the rules may arrive before clarity does.

At the centre of the dispute is the European Commission’s Digital Omnibus, an attempt to simplify overlapping digital laws and ease the burden on companies struggling to keep pace with global competitors. The intention was pragmatic but the outcome, so far, is not.

One key disagreement

The negotiations broke down over one issue: Should AI systems already governed by sector-specific safety laws, such as those used in medical devices or industrial machinery, also fall under the AI Act?

Some lawmakers, backed by influential member states, argued that requiring compliance with both frameworks would create duplication and stifle innovation. Others saw the proposed exemptions as a fundamental weakening of the Act itself, potentially carving out large swathes of high-risk AI from meaningful oversight.

That disagreement proved irreconcilable, at least for now.

The clock is still ticking

What makes this impasse especially significant is the timing. The AI Act is already law. And unless a political agreement is reached soon, its most consequential provisions, those governing high-risk AI systems, are set to take effect on 2 August 2026 as originally planned.

This creates a strange and risky situation. Policymakers are still debating whether to soften or delay the rules, while businesses are expected to prepare for full compliance. For many organisations, especially those operating across multiple EU markets, the lack of clarity is frustrating and destabilising.

A risk of fragmentation?

There is also a growing risk of fragmentation. Even if some national regulators are not fully ready to enforce the rules by August, others are moving ahead with preparations. This raises the prospect of uneven enforcement across the EU, with companies potentially exposed to scrutiny in some jurisdictions but not others.

For compliance teams, keeping track of these moving parts is becoming an increasingly complicated job, with both financial and reputational repercussions for getting it wrong.

What UK businesses need to consider

For UK businesses, the implications are immediate and could be far-reaching. The “Brussels Effect” remains in play. Any UK organisation offering AI-driven products or services into the EU market, or handling EU data, will find itself within scope of the AI Act. As with GDPR before it, the EU is once again setting a global benchmark, and it’s one that UK companies cannot ignore.

At the same time, the UK’s own approach to AI regulation is evolving along a different path. Rather than a single, comprehensive framework, the UK is pursuing a principles-based, sector-led model. While this may appear more flexible, it makes compliance more complex for businesses operating across borders.

Compliance is no longer about meeting one standard, but navigating two distinct, and possibly diverging, regulatory approaches. For many organisations, this will involve governance frameworks capable of satisfying both regimes simultaneously.

Should you “wait and see”?

For months, many organisations have been working on the assumption that enforcement deadlines might be pushed back, buying much-needed time to prepare. That assumption is now looking increasingly uncertain.

With the August deadline still in place unless a deal is reached before then, businesses cannot safely assume that additional time will be granted. Even if a compromise is agreed in the coming weeks, it is unlikely to fundamentally change the AI Act’s core framework: its risk-based classification system, its focus on high-risk use cases, and its emphasis on transparency and accountability.

Any delay, if it materialises, would not remove the need for compliance. It would just shift when enforcement pressure fully takes effect.

A turning point for AI regulation?

Rather than waiting for political certainty, some organisations are treating August 2026 as a fixed point and building their compliance programmes accordingly.

They are mapping where AI is used across their operations, assessing risk levels, embedding governance structures, and preparing for transparency obligations such as AI disclosures and content labelling. 
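The mapping exercise described above often starts with a simple internal inventory. The sketch below shows what such a register might look like in Python; the risk tiers, field names, and review rule are illustrative assumptions for this example, not the AI Act’s legal categories or criteria.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative risk tiers loosely mirroring the Act's risk-based approach.
# The actual legal categories and classification criteria are defined in
# the Act itself, not here.
class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"        # e.g. transparency obligations may apply
    HIGH = "high"              # e.g. conformity and governance duties may apply
    PROHIBITED = "prohibited"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical organisation-wide AI inventory."""
    name: str
    business_owner: str
    use_case: str
    risk_tier: RiskTier
    eu_market_exposure: bool   # offered into the EU market or handling EU data
    transparency_measures: list[str] = field(default_factory=list)

def needs_priority_review(record: AISystemRecord) -> bool:
    """Flag systems that warrant compliance attention first:
    EU-exposed systems in the highest illustrative tiers."""
    return record.eu_market_exposure and record.risk_tier in (
        RiskTier.HIGH,
        RiskTier.PROHIBITED,
    )

# Example inventory entry: a customer-facing chatbot with an AI disclosure.
chatbot = AISystemRecord(
    name="customer-support-assistant",
    business_owner="Customer Operations",
    use_case="Automated first-line support responses",
    risk_tier=RiskTier.LIMITED,
    eu_market_exposure=True,
    transparency_measures=["AI interaction disclosure"],
)

print(needs_priority_review(chatbot))  # prints False: limited tier, not flagged
```

In practice such a register would live in a governance tool rather than code, but the structure is the same: every system gets an owner, a use case, a risk classification, and a record of the transparency measures attached to it.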

The collapse of talks in Brussels could be seen as a reflection of a broader tension shaping how AI will be regulated. Europe is attempting to strike a balance between enabling innovation and enforcing safeguards, between reducing bureaucracy and maintaining trust. But that balance is proving difficult to achieve.

Talks are set to resume in May, but until a final text is formally adopted, the 2 August deadline remains legally in force. For compliance teams, that creates a dual reality: a political process still in motion on one side, and a binding regulatory timetable already ticking on the other.

It’s hard not to see the irony here. A piece of legislation designed to bring more clarity and legal certainty to AI regulation is now responsible for a great deal of regulatory uncertainty in the EU’s AI policy.

“How to build a compliant AI programme” sets out a practical framework for building and managing AI in a compliant, controlled way. It explains how to identify AI use across your organisation, assess risk, implement governance, and meet evolving regulatory expectations across the UK and EU.