AI in the dock: Who is liable when machines get it wrong?

Artificial intelligence has become the quiet engine behind modern business. It writes copy, screens candidates, approves loans, forecasts demand, and even decides which customer gets a discount. But what happens when it gets it wrong? Who should bear legal liability for AI systems?


When AI systems misfire, whether through a chatbot that lies, a model that discriminates, or an award made in error, someone is left to pick up the pieces. But who?


That’s the question regulators, lawyers, and compliance officers are beginning to grapple with as AI moves from novelty to infrastructure. Liability, long the bedrock of business risk, is being rewritten by machines that make decisions no one fully understands.


The invisible chain behind every AI decision

Most companies using AI didn’t build it. They rent, integrate, or license it.

Behind every “AI solution” is a supply chain of actors: data suppliers, model developers, integrators, consultants, platform providers, and finally the business deploying it. Each layer adds value and complexity, but also diffuses responsibility.


Picture a retailer using an AI-powered pricing tool. The model was trained by one company, fine-tuned by another, and plugged into the retailer’s sales platform by a consultant. When that system begins underpricing certain regions and breaching competition rules, who’s accountable? The vendor who wrote the code? The consultant who integrated it? Or the retailer who relied on it?


That’s the problem at the heart of AI liability. The more distributed the technology, the more tangled the blame.


A glimpse of what’s already happening


AI hallucinations—fabricated cases that appear convincingly real—are no longer a fringe issue in legal practice. In July 2025 alone, over 50 cases involving fake legal citations generated by AI tools were publicly reported across multiple jurisdictions. This figure doesn’t even account for private reports still under review, suggesting the true scale of the problem is much larger.


The legal profession around the world has entered uncharted territory. In a stark warning delivered by the High Court in June 2025, senior judges condemned the misuse of artificial intelligence tools by solicitors and barristers who submitted fake legal authorities in court. These weren’t obscure technicalities, but wholly fictitious case citations that made their way into legal arguments, judicial review applications, and even multimillion-pound commercial litigation.


These are early warnings of a deeper truth: the same opacity that makes AI powerful also makes it dangerous. You can’t easily audit a black box, and you can’t easily explain its errors to a regulator.


The compliance nightmare no one wants to own

For compliance teams, the nightmare scenario isn’t a malicious AI. It’s a well-intentioned one that quietly does harm while everyone assumes it’s working perfectly.


AI isn’t static. It learns, drifts, and updates. A model that performed flawlessly last quarter may behave unpredictably today because the world changed, the data shifted, or the vendor silently retrained it. Meanwhile, the organisation deploying it remains legally responsible for every automated decision that affects customers, clients, or employees.
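
What does monitoring for drift look like in practice? Below is a minimal sketch, assuming the deployer can export the model's numeric output scores and has kept a validated baseline sample; the Kolmogorov-Smirnov test, the synthetic scores, and the 0.05 threshold are illustrative assumptions, not a regulatory standard.

```python
# A rough drift check: compare a validated baseline sample of model scores
# against recent production scores. The KS test and the 0.05 threshold are
# illustrative choices, not a regulatory standard.
import numpy as np
from scipy.stats import ks_2samp

def scores_have_drifted(baseline: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if live scores look significantly different from the baseline."""
    result = ks_2samp(baseline, live)
    return result.pvalue < alpha

# Hypothetical example: synthetic "approval scores" standing in for real model outputs.
baseline_scores = np.random.default_rng(0).beta(2, 5, size=10_000)  # last quarter's validated outputs
live_scores = np.random.default_rng(1).beta(2, 4, size=2_000)       # this week's production outputs

if scores_have_drifted(baseline_scores, live_scores):
    print("Score distribution has shifted - flag the model for revalidation.")
```

A failed check does not prove harm; it is the prompt to revalidate the model and document what was found, which is exactly the evidence a regulator will later ask for.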


That reality is reshaping corporate risk management. A failure to monitor AI outputs, validate models, or question results is quickly becoming a form of negligence.


Contracts, caveats and the illusion of protection

Traditionally, liability starts with the contract. Who promised what? What did the terms say?


But in the AI world, contracts can be deceptive comfort blankets. Many providers sell systems under “use at your own risk” terms, disclaiming any responsibility for accuracy. These clauses often stand up in court, especially when the product is off-the-shelf.


If, however, the system is customised — fine-tuned for a client’s specific use — the liability calculus changes. A vendor that promises tailored performance or guarantees compliance can’t easily hide behind disclaimers when the tool fails. The more bespoke the AI, the harder it is for its maker to walk away.


That’s why compliance teams must read AI contracts like forensic accountants: What does “accuracy” mean? What happens if data is biased? Who pays when the model drifts? Does the vendor have insurance, or is the warranty meaningless because the company couldn’t pay a claim anyway?


When everyone’s responsible, no one is

The deeper challenge is causation. AI decisions rarely happen in isolation.


A faulty output might trace back to bad training data from an external dataset, an integration error by a third-party consultant, or a deployment mistake by the client. Each actor will insist the problem lies elsewhere. Regulators and judges are left trying to apportion blame across a network of invisible hands.


Some policymakers argue that the only way to make the system fair is to impose strict liability — to hold anyone deploying “high-risk AI” automatically responsible, regardless of fault. Others prefer to spread responsibility along the value chain, forcing each actor to prove they met minimum safety and oversight standards.


The coming wave of AI liability laws?

For a while, Europe looked set to lead the world in AI accountability. The AI Liability Directive, first proposed in 2022, was meant to close the gap between traditional civil liability law and the new reality of algorithmic harm. It would have made it easier for victims of AI-related damage to sue by shifting the burden of proof: companies would have had to show they took reasonable steps to prevent harm, not the other way around.


But in July 2025, the European Commission quietly withdrew the proposal.


The Directive was ambitious. It promised a rebuttable presumption of causality, allowing victims to argue that an AI system plausibly caused harm without having to prove the complex technical chain of events. It would also have empowered national courts to order evidence disclosure from developers of high-risk AI systems, and harmonised the patchwork of national rules across the EU. In essence, it was designed to work hand-in-hand with the EU AI Act, ensuring that people harmed by AI enjoyed the same protection as those harmed by other technologies.


So why the U-turn?


Officially, the Commission cited the need for more reflection and alignment with other digital legislation. Behind the scenes, industry groups had raised concerns about innovation-chilling effects, legal uncertainty, and overlap with existing frameworks. The message was clear: Europe isn’t abandoning accountability — it’s buying time to get it right.


The withdrawal doesn’t affect the AI Act, which came into force on 1 August 2024 and continues to shape how AI is developed and deployed across the bloc. But for now, the question of who pays when AI fails in Europe remains unresolved — and that legal vacuum leaves compliance professionals navigating uncharted ground. 


Civil liability emerging case by case

While Europe pauses to rethink regulation, the United States is letting its courts do the heavy lifting. There’s no federal AI liability statute, no blanket law defining who’s responsible when algorithms cause harm. Instead, judges are adapting long-standing doctrines of negligence, fraud, and product liability to a new kind of defendant — one that doesn’t exist in human form.


In Mata v. Avianca (2023), two New York lawyers used ChatGPT to help prepare a legal brief. The AI confidently generated fake case law — and the lawyers, trusting the machine, submitted it to court. When the judge discovered the citations were hallucinated, he sanctioned both lawyers for misconduct. The fine itself was modest, but the precedent was profound: blind reliance on AI is no defence. Professional duties of care still apply, even when the mistake originates from a model rather than a person.


Product liability law is also evolving. In Doe v. GitHub (2023), developers alleged that AI coding assistants trained on public repositories reproduced copyrighted material without consent. The case, still winding its way through US courts, tests whether AI outputs can amount to defective products or infringe intellectual property rights — and who in the development chain bears the blame if they do.


Together, these cases show a pattern emerging: US courts are not waiting for Congress. Instead, they are applying familiar tort principles (negligence, misrepresentation, duty of care) to unfamiliar facts and stretching them to fit a world of automated decision-making. Where AI tools are marketed as reliable, the courts are treating those claims as warranties. Where companies fail to supervise or test algorithms, judges are calling it negligence.


The anatomy of preventable failure

Most AI disasters can be traced back to a handful of preventable flaws:


  • Over-promising vendors whose marketing outruns their engineering.
  • Deployers who fail to test or validate before rollout.
  • Blind trust in outputs without human oversight.
  • Data drift and bias that go undetected because no one’s watching.
  • Contracts that shift blame but not control.


When a regulator investigates, they’ll ask a simple question: did the organisation take reasonable steps to prevent foreseeable harm? For most companies, that answer lies not in the algorithm, but in the governance.


The compliance roadmap: keeping your AI supply chain in check


The good news is that liability is manageable. AI doesn’t have to be a compliance trap — if you treat it like any other high-risk system.


  • Map your AI supply chain — Know who builds, trains, integrates, and maintains every system you use.
  • Vet your vendors — Demand to see their bias testing, update policy, and insurance coverage.
  • Negotiate contracts with precision — Define performance standards, escalation paths, and liability limits in measurable terms.
  • Test, monitor, and document — Keep audit trails, version histories, and validation records (a minimal sketch follows below).
  • Train your people — AI literacy is now part of compliance competence.
  • Prepare for failure — Incident response for algorithmic error should sit alongside data breaches in your crisis plan.


These aren’t nice-to-haves; they’re the first line of defence when the regulator asks, “What controls did you have in place?”
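
As an illustration of the "Test, monitor, and document" step, the sketch below shows one way a deployer might keep an append-only log of automated decisions. It assumes a Python deployment; the field names, the JSON-lines file, and the "pricing-engine" entry are hypothetical, chosen for illustration rather than taken from any standard.

```python
# Append-only audit trail for automated decisions: which system and model
# version produced an output, who supplied it, and whether a human reviewed it.
# Field names and the JSON-lines format are illustrative, not a standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    timestamp: str         # when the decision was made (UTC, ISO 8601)
    system_name: str       # internal name of the AI system
    vendor: str            # who built or licensed the model
    model_version: str     # exact version or checkpoint that produced the output
    input_reference: str   # pointer to the input data, not the data itself
    output_summary: str    # what the system decided
    human_reviewed: bool   # whether a person checked the output before it took effect

def log_decision(record: DecisionRecord, path: str = "ai_decision_log.jsonl") -> None:
    """Append one decision to a JSON-lines audit file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Hypothetical example entry for an AI pricing tool.
log_decision(DecisionRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    system_name="pricing-engine",
    vendor="ExampleVendor Ltd",
    model_version="2025-07-rev3",
    input_reference="orders/2025-10-14/batch-17",
    output_summary="regional discount set to 12%",
    human_reviewed=False,
))
```

Even a record this simple answers the questions that matter in a dispute: which model version made the decision, where its inputs came from, and whether anyone looked at it before it took effect.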


Train your team now on AI compliance