Breaking news: EU lawmakers agree on new direction for AI Act compliance

The European Parliament has just made its move on the future of the EU AI Act. In a decisive vote, MEPs backed a package of amendments designed to “simplify” the law. At the same time, EU Member States in the Council of the European Union have already agreed their own position. Together, this sets the stage for the final phase of negotiations with the European Commission.

The reality is that this isn’t simplification in the way many businesses had hoped. But it does offer clarity.

The biggest shift: certainty over flexibility

For months, companies have been stuck in limbo, unsure when exactly they would need to comply with the most demanding parts of the AI Act, especially rules on high-risk systems.

That uncertainty is now being resolved. The Parliament has backed fixed dates for compliance, delaying the most complex obligations but making the timeline far more predictable. High-risk AI systems in sensitive areas like employment, education, law enforcement, and critical infrastructure are now expected to fall under full obligations from December 2027. Systems tied to existing product safety laws will follow in August 2028. Meanwhile, transparency rules, like watermarking AI-generated content, are pushed to November 2026.

This is a major shift away from the earlier idea that compliance would depend on when technical standards were ready. Instead, lawmakers are drawing a clear line in the sand. For businesses, that’s a double-edged sword. You have more time but less room to delay.

New risks: generative AI is in the spotlight

One of the most eye-catching changes is the proposed ban on so-called “nudifier” systems: AI tools that generate explicit images of real people without consent.

This signals a broader regulatory direction that puts generative AI front and centre in enforcement thinking. The ban does come with nuance. Systems that include effective safeguards to prevent misuse may still be allowed. That puts the burden on developers to prove their controls actually work.

Essentially, it’s no longer just about what your AI is designed to do, but what it could realistically be used for.

Quiet but important changes 

Some of the most impactful updates are less headline-grabbing. Lawmakers are opening the door to using personal data, including sensitive data, to detect and fix bias in AI systems, provided strict safeguards are in place. This is a significant development, especially for organisations struggling to reconcile fairness obligations with data protection rules.

Support is also being extended beyond SMEs to small mid-cap companies, meaning a wider group of businesses may benefit from lighter requirements and reduced penalties.

At the same time, there’s a clear effort to avoid duplication. Where AI systems are already regulated under existing sectoral laws, like medical devices or product safety, AI Act obligations may be applied more lightly.

All these changes show an effort to make the law more workable in practice, even as its core obligations remain firmly intact.

Don’t relax yet

It’s tempting to see these delays and adjustments as a reason to pause, but none of these changes is law yet. The amendments now move into “trilogue” negotiations between Parliament, Council, and Commission, where the final text will be agreed. And while there is growing alignment between the institutions, nothing is guaranteed.

If negotiations falter or timelines slip, the original AI Act deadlines still apply, including the key date of August 2026.

That means businesses are now operating in a split reality: a likely future with delayed deadlines, and a legal present where those delays don’t yet exist.

What this really means 

The AI Act is no longer an abstract future risk. It is a structured, time-bound regulatory framework that is rapidly taking shape. Organisations should be using this period to get ahead. That means identifying where AI is being used across the business, understanding which systems could fall into high-risk categories, and putting governance structures in place now.

It also means investing in AI literacy. Even though there have been attempts to soften this requirement, organisations will be expected to understand and manage the risks of the systems they deploy.

And perhaps most importantly, businesses need to start documenting decisions. Why a system is classified as low risk. How bias is being addressed. What safeguards are in place. When enforcement comes, that evidence will matter.

The EU isn’t backing away from AI regulation. It’s refining it. The latest developments show that there will be fewer grey areas, clearer deadlines, and stronger expectations on organisations to act responsibly.

Yes, there is more time. But there is also far less ambiguity about what’s coming.

The guide, When Data Thinks, explores the critical role of data quality in ensuring effective compliance. It provides insights into how organisations can enhance data trust, improve decision-making, and optimise compliance processes by addressing data integrity, consistency, and accuracy. This guide is essential for teams looking to make data-driven decisions while meeting regulatory standards. Download it here.