AI regulation in Europe: Enforcement is fragmented, the rules are shifting and businesses are caught in the middle

For many businesses, the EU AI Act still feels like a future problem. Something complex, unfinished, and safely parked behind unanswered questions about regulators, guidance, and enforcement.

That assumption is quickly becoming outdated.

As of 1 January, Finland became the first EU member state with fully operational AI Act enforcement powers. While much of Europe is still debating governance structures, Finland can already investigate, audit and fine companies under the Act.

At the same time, the European Commission (EC) is proposing changes to core digital laws, including the AI Act itself, through the Digital Omnibus. And across the EU, most countries have missed the deadline to even name their AI regulators.

The result is a regulatory landscape that is live, fragmented, politically contested and moving faster than many compliance teams expect.

Where the AI Act is right now

The AI Act is not a draft. It is not a proposal. It is in force.

Key obligations have already started to apply, particularly for providers of general-purpose AI models (GPAI) and high-risk AI systems. From August 2025, companies offering large language models must maintain detailed records on how their systems are built and trained. Transparency, risk management, and governance are no longer optional.

By August 2026, full compliance is expected. At that point, penalties for most violations can reach €15 million or 3 percent of global annual turnover, and up to €35 million or 7 percent for prohibited AI practices.

The EC has indicated that it will take a pragmatic approach during this transition year, particularly where companies act in good faith and follow voluntary codes of practice. But that should not be mistaken for a grace period. Expectations are rising, not easing.

Most countries missed the deadline. Finland did not.

Under the AI Act, EU member states were required to designate their national authorities by 2 August 2025. Nineteen out of twenty-seven failed to do so. This includes major economies such as Germany, France, Italy, Belgium and Austria.

In some countries, the situation is especially complex. Ireland may end up with eight separate AI regulators. Latvia could have as many as seventeen. In France, there is still no clear answer as to which authority will take the lead.

But lack of designation does not mean lack of applicability.

As French lawyers have already warned clients, the AI Act is directly applicable EU law. Its obligations can be enforced through the courts even where a national regulator has not yet been named. Regulatory silence does not equal legal safety.

Finland offers a stark contrast. It has designated Traficom as a single point of contact, established a Sanctions Board for fines over €100,000, and committed to AI sandbox rules by February 2026.

The result is clarity. Finnish businesses know who to speak to, what to expect, and how enforcement will work. Elsewhere, uncertainty reigns but enforcement risk remains.

Spain shows what “prepared” looks like

Spain stands out as the other major exception.

It created the Spanish Agency for AI Supervision (AESIA) back in 2023, well ahead of the AI Act’s deadlines. While multiple regulators will still be involved depending on sector and use case, companies at least have an established supervisory framework.

Spanish regulators are also already enforcing adjacent laws. The Spanish Data Protection Authority (AEPD) has reminded businesses that it can act immediately against prohibited AI systems that process personal data unlawfully.

It’s clear that countries that moved early have created certainty for their businesses, while those that did not have shifted risk and ambiguity onto companies instead.

Digital Omnibus: When deregulation meets reality

Just as businesses are trying to understand and implement the AI Act, the EC has introduced the Digital Omnibus, a package of proposals that would weaken parts of the EU’s digital rulebook, including the AI Act and GDPR.

Civil society organisations have argued that the Omnibus reflects years of intensive Big Tech lobbying and risks hollowing out the very protections the EU spent a decade building. Politico has described it as the possible end of the “Brussels effect,” with Washington now setting the pace on deregulation.

For businesses, this creates the impression that AI regulation might soften, slow down, or become optional. But that is unlikely to be how things play out in practice.

Even if parts of the AI Act are amended, enforcement has already begun. National authorities, courts, regulators, customers, and counterparties will continue to expect responsible AI governance. Weakening the law on paper does not remove reputational risk, contractual obligations, or liability exposure.

And importantly, regulatory uncertainty almost always benefits large incumbents with lobbying power and legal resources, not smaller or mid-sized organisations trying to do the right thing.

What this means for businesses now

The biggest risk for businesses is waiting.

  • Waiting for your country to appoint a regulator
  • Waiting for perfect guidance
  • Waiting to see how the Omnibus lands
  • Waiting until August 2026 feels closer

That approach worked poorly under GDPR. It is likely to be worse under the AI Act. The companies best positioned right now are not those with all the answers, but those asking the right questions:

  • Where are we using AI across the business, formally and informally?
  • Which systems could fall into high-risk categories?
  • Who owns AI governance internally?
  • How are decisions documented, challenged, and reviewed?
  • What evidence could we produce tomorrow if asked?

AI compliance is not just a legal exercise. It touches procurement, IT, product design, HR, marketing, risk, and senior leadership. Like anti-money laundering (AML) or data protection, it fails when it is treated as someone else’s problem.

The key takeaway

The AI Act is here. Enforcement has started. Deadlines are stacking up. And the rules may yet change, but not in ways that remove accountability.

Finland has shown that “we’re still preparing” is no longer an acceptable answer. Spain has shown that early action creates business certainty. The Omnibus has shown that political pressure will not make this simpler.

For businesses operating in or into the EU, the question is no longer whether to prepare, but whether to prepare proactively or under pressure. August 2026 is coming faster than most compliance teams think.
