The EU’s landmark AI Act has just undergone its first major political recalibration.
After intense lobbying from industry groups, growing concerns over competitiveness, and warnings that Europe risked regulating itself out of the AI race, the Council of the European Union and the European Parliament have reached a provisional agreement on the Omnibus package to streamline parts of the AI Act. The move delays some of the Act’s most demanding obligations by up to 16 months, particularly those governing high-risk AI systems.
While the headline is that the EU AI Act has been delayed, the reality is more nuanced: this is less about deregulation and more about regulatory repositioning.
Brussels blinks
The most significant change is the postponement of obligations for Annex III high-risk AI systems. Stand-alone high-risk systems will now fall under the regime from 2 December 2027 rather than August 2026, while AI embedded within regulated products such as medical devices, machinery, lifts, toys, and industrial equipment will not face full application until 2 August 2028.
The EU says the delay is practical rather than ideological. Regulators acknowledge that businesses still lack many of the harmonised standards, technical guidance, and implementation tools they need to comply. Industry groups had argued that the original timelines created a real risk of legal uncertainty and duplicate compliance obligations, especially in heavily regulated sectors already subject to product safety frameworks.
The Omnibus package attempts to untangle some of those overlaps. One of the most consequential changes is the new mechanism for managing conflicts between the AI Act and sector-specific legislation. The Machinery Regulation, notably, will no longer be directly subject to overlapping AI Act provisions, with the European Commission instead empowered to use delegated acts to integrate AI-related safety requirements where necessary.
For industrial and manufacturing businesses, this matters. Many had warned they were heading toward a compliance maze in which identical AI systems could simultaneously trigger obligations under multiple regulatory regimes. The revised framework signals a more pragmatic and business-conscious approach from Brussels.
A turning point for Europe’s AI strategy?
The political significance here is bigger than compliance simplification. The AI Act was originally framed as Europe’s opportunity to become the world’s AI regulator, a GDPR-style global benchmark for AI governance. This delay suggests Brussels is becoming more responsive to criticism that Europe is better at regulating new technologies than at turning them into globally competitive businesses.
The pressure has been building for a while. European technology and industrial leaders warned that excessive regulatory complexity risked undermining Europe’s competitiveness in the global AI race.
The Omnibus package reflects a growing recognition within the EU that innovation policy and regulatory policy cannot operate separately.
What has not been delayed?
Despite the headlines, some parts of the AI Act are already in force. Certain AI practices are already banned, including manipulative AI systems and tools that create non-consensual intimate images, “nudification” content, or AI-generated child sexual abuse material. Businesses are also already expected to ensure staff have appropriate AI literacy and training. Rules requiring labels for AI-generated content are still coming, now with a deadline of December 2026. And importantly, expectations around responsible AI use, oversight, accountability, and governance have not gone away.
In fact, the Omnibus package strengthens several safeguards rather than weakening them. The agreement reinstates the “strict necessity” threshold for processing sensitive personal data in bias detection and mitigation exercises. Providers claiming exemptions from high-risk classification will still need to register systems in the EU database. The powers of the AI Office are also being reinforced, even while certain sectors, including finance, law enforcement, and the judiciary, remain under national supervisory oversight.
What UK businesses need to know
Although the UK is pursuing its own more principles-based AI governance model rather than replicating the AI Act directly, British firms remain deeply exposed to the EU framework. Any UK company placing AI-enabled products or services on the EU market, processing EU customer data, or supporting EU-facing operations may still fall within scope.
Also, many of the operational obligations businesses associate with “AI regulation” are already embedded within existing UK frameworks.
Financial services firms, for example, are already operating under the expectations of the FCA, the SMCR, Consumer Duty obligations, and UK GDPR requirements. If a bank, insurer, wealth manager, or fintech firm is using AI to support credit decisions, suitability assessments, fraud monitoring, customer interactions, or risk profiling, regulators already expect explainability, accountability, governance, and evidence of oversight.
The AI Act delay does not suspend those responsibilities or remove litigation risk, reputational risk, or accountability.
The businesses that will be best prepared by December 2027 will not be the ones scrambling to pull together paperwork at the last minute. They will be the organisations using the next 18 months to properly understand how they use AI, track how data moves through their systems, review third-party AI suppliers, improve oversight and record-keeping, and clearly define who is responsible for AI governance across the business.
That preparation is becoming increasingly important because regulators themselves are evolving. The Omnibus agreement is part of a much broader EU “simplification agenda” aimed at reducing administrative burdens while preserving core protections. It reflects a regulatory shift from theoretical rulemaking toward practical implementation.
What’s next?
The provisional agreement still requires formal endorsement by both the Council and the European Parliament, followed by legal and linguistic review. Formal adoption is expected within weeks.
Boards should start asking questions such as: Where is AI already being used inside the organisation? Which systems could later fall into high-risk categories? Can decisions be explained and audited? Who is accountable when AI outputs go wrong? How are vendors being assessed? And what evidence exists that the organisation understands and governs its AI risks?
The EU may have delayed some of its AI rules, but the bigger shift toward AI accountability and governance is already well underway.