
Is the UK’s AI law gap a relief, or a risk?

An assumption seems to be taking hold across many UK organisations: that the absence of a formal AI Act reduces the immediate need to act.

It’s an understandable assumption. With the AI Act, the EU introduced the world’s first comprehensive AI law. It involves strict obligations, clearly defined risk categories, and eye-watering penalties. The UK, by contrast, appears to be taking its time publishing guidance, consulting stakeholders, and signalling future legislation that has yet to arrive.

That contrast feels like breathing room, but it could be something else. As Matthew Norris notes, treating this gap as a reason to wait is not just misguided, it is “one of the more consequential strategic mistakes a UK organisation deploying AI can currently make.”

No regulation?

To understand the UK’s position, you have to look at what it is already doing. There may be no single UK AI Act, but there is regulation. Instead of building a new framework from scratch, the UK has doubled down on a sector-led system, where existing regulators apply existing laws to AI. What that means is that AI is governed. It’s just more complicated.

If your organisation is deploying AI that touches personal data, then UK GDPR is in play. That brings a web of obligations such as lawful basis, transparency, fairness, data minimisation, and impact assessments. The Data (Use and Access) Act 2025 has shifted the rules on automated decision-making, making it easier to deploy such systems but only if real safeguards are in place. Human intervention and clear disclosure are conditions of use.

And the regulators are not standing still. The ICO has made it clear that AI, particularly agentic systems acting on behalf of users, is firmly in its sights. Its recent work highlights risks that many organisations are starting to experience, such as blurred lines of accountability across AI supply chains, the opaque inference of sensitive personal data, and new forms of security exposure created by autonomous systems.

As Norris observes, “the ICO is not waiting for Parliament… UK GDPR already requires… human oversight of significant automated decisions.” The idea that enforcement will only begin once a new law is passed is wrong.

The EU AI Act is the regulation you already have

At the same time, the UK’s apparent regulatory calm is complicated by a much louder reality across the Channel. The EU AI Act is no longer theoretical, and its most significant obligations, especially for high-risk systems, will be in force from August 2026. It classifies AI by risk, imposes obligations accordingly, and enforces compliance with meaningful penalties. Systems used in areas like hiring, credit scoring, healthcare, and education face the strictest scrutiny, including requirements for risk management, human oversight, technical documentation, and continuous monitoring.

And the Act does not stop at EU borders. A UK company does not need to be based in Paris or Berlin to fall within its scope. If its AI systems affect individuals in the EU, whether customers, employees, or users, then the regulation applies. A fintech firm offering services into Europe, a SaaS platform with EU users, or a healthcare provider using AI in diagnostics for EU patients will all find themselves subject to the same rules.

This is where the illusion of regulatory distance collapses. The UK may not have an AI Act, but many UK organisations are already living under one.

As Norris puts it, “the absence of UK legislation does not create an exemption from EU legislation.”

UK organisations are operating in a regulatory environment where multiple regulators, frameworks, and expectations overlap without fully aligning. A single AI system might engage the ICO on data protection, the FCA on consumer fairness, Ofcom on online safety, and EU authorities under the AI Act, all at once. There is no single checklist that resolves this. 

That complexity creates a new kind of regulatory and strategic risk.

The cost of waiting

Some organisations are trying to take advantage of the UK’s lighter touch. They are moving faster, experimenting more freely, and postponing governance decisions, assuming they can retrofit compliance later.

But governance is not something that bolts on easily after the fact. Systems that were not designed to be explainable are difficult to explain. Systems without built-in oversight are hard to control. Data practices that were never properly scoped are expensive and sometimes impossible to unwind.

As Norris warns, “organisations that defer governance now will face a harder retrofit later.” And by the time that retrofit becomes unavoidable, whether due to EU obligations, UK enforcement, or commercial pressure, the operational debt can be significant.

From compliance to accountability

The deeper issue, though, is trust.

AI systems are increasingly making or informing decisions that matter about people’s finances, opportunities, health, and access to services. When those systems fail, the question is never just whether a rule was broken. It is whether the organisation understood what its own technology was doing.

That is the standard regulators are moving toward, in the EU and in the UK: accountability. It means being able to explain how your system reached a decision, to demonstrate who was responsible for its behaviour, to show what data the system used and whether it should have used it, and, crucially, to intervene when something goes wrong.

These are requirements already embedded in UK GDPR, already formalised in the EU AI Act, and increasingly demanded by customers and partners.

A legal and strategic choice

The UK’s approach to AI regulation is often described as flexible, even pragmatic. It avoids the rigidity of a single, top-down statute and allows regulators to adapt within their domains. That flexibility may prove to be a competitive advantage. But for now, it comes with a cost.

Without a single framework to point to, the burden is on organisations to interpret and justify their own approach. The question is no longer whether you have complied with a specific law, but whether you can defend your system in a landscape where multiple laws, regulators, and expectations converge.

In the end, the absence of a UK AI Act is not a gap in regulation. It is a test of organisational maturity. The companies that recognise this are building governance into their systems from the outset because they understand that regulation, in one form or another, is inevitable.

The others are waiting. But that is a risk. Because when the scrutiny does come, from a regulator, a partner, or a customer, the questions will be: what did your AI system do, and how do you know?

As Norris notes, the real measure is whether you can demonstrate “what the agent did, under whose authority, with what access, and how quickly you were able to respond.”

And it will be very difficult to improvise that answer when it is needed.

AI entered a new regulatory era in 2026. The EU is progressing the Digital Omnibus package, the EU AI Act is moving into its implementation phase, and regulators worldwide are issuing new rules on AI. In this webinar, we took a deeper look at how organisations can build a safe and compliant AI framework. We explored the next steps under the EU AI Act, the UK’s DUAA, and the most important AI investigations and fines from the past year. Watch it here.