The increasing legal liability of AI hallucinations: Why UK law firms face rising regulatory and litigation risk

AI is now embedded in everyday legal practice, from drafting emails to generating contracts to structuring arguments. But the UK’s late-2025 case law reveals that AI hallucinations are no longer isolated mishaps. They are a growing source of judicial sanctions, reputational damage and potential criminal liability for law firms.


A growing concern


In November 2025 alone, the UK recorded new cases of AI-generated false citations. These included the 20th UK hallucination case, an employment tribunal matter involving fabricated legal citations, as well as a government update recording further hallucinations and non-AI citation errors across government reports and legislative documents.


By the end of November, the UK total had risen to 24 recorded incidents, part of an international trend now surpassing six hundred cases globally. As the University of London and the Government AI Hallucination Tracker highlight, the problem is escalating across all sectors, not just the courts. But in the legal world, the consequences are uniquely severe: costs orders, regulatory referrals, judicial criticism, and, increasingly, warnings of criminal prosecution.


What recent cases reveal about the risks


Recent case law from 2025 shows an unmistakable trend: courts and tribunals are now actively detecting and calling out the misuse of generative AI in legal documents. In Choksi v IPS Law LLP, a managing partner’s witness statement was found to contain fabricated cases, invented authorities and misleading “precedents,” with the judge noting clear signs of AI involvement. A paralegal later confirmed they had relied on Google’s AI tool, and the firm was criticised for having no real verification system in place.


The problem wasn’t isolated. In the Napier House appeal, the tribunal confronted entire grounds of appeal built on case citations that didn’t exist. Again, the tribunal concluded that AI-generated, unchecked material had wasted significant judicial time and compromised the credibility of the application.


Unrepresented parties have also stumbled into the same trap. In Holloway v Beckles, litigants submitted three non-existent cases produced by consumer AI tools. The tribunal labelled the behaviour “serious” and issued a costs order, making it clear that even lay users are accountable for AI-fabricated authorities.


In another example, Oxford Hotel Investments v Great Yarmouth BC, AI didn’t invent cases but distorted them. The tribunal found that AI had misquoted a key housing law authority to support an implausible argument about microwaves being “cooking facilities.” The judge described the incident as an illustration of the risks of using AI tools without any checks.


All of this has culminated in a broader warning from the High Court that submitting AI-generated false information could expose lawyers not only to professional sanctions but to potential criminal liability, including contempt of court or even perverting the course of justice. One case earlier in the year uncovered that 18 of 45 citations in a witness statement had been fabricated by AI, yet were presented with complete confidence.


The message from the judiciary is that AI is not an excuse. Verification is required, and whether errors arise from negligence, over-reliance or blind trust in a tool, the professional and legal consequences are real.


Why is it worse for law firms?


It’s becoming increasingly clear from the recent run of cases that regulatory expectations around AI use in legal work have shifted dramatically. Judges are no longer treating AI mistakes as understandable or accidental. Instead, they now expect firms to have concrete safeguards in place, such as verification steps, human review, clear internal policies on AI research tools and documented quality-control processes. Essentially, “we trusted the tool” is no longer a defence.


At the same time, a more troubling consequence is emerging. AI hallucinations are beginning to seep into the legal ecosystem itself. UK courts now routinely preserve false citations directly in their judgments. Unlike in the US or Australia, these fabricated cases end up in searchable public records and, inevitably, in the datasets powering search engines and future AI systems. The risk is circular: hallucinated cases can reappear as “authority,” tempting lawyers who assume a quick search result must be legitimate.


These developments underscore a crucial point about responsibility. Even when an error originates with a junior employee, as in Choksi, where a paralegal admitted relying on AI, the burden still falls on the firm’s leadership. Regulators and judges are increasingly viewing AI misuse as a systems failure rather than an individual mistake. If oversight is weak, accountability rises to partner level.


And the consequences extend far beyond the courtroom. Each AI-related misstep is now highly visible, logged in public trackers, noted in judgments and often picked up by the legal press. For firms, the reputational fallout can be severe, from damaged trust with clients to uncomfortable conversations with professional indemnity insurers to heightened scrutiny from regulators. In an environment where credibility is everything, even a single hallucinated citation can leave a lasting mark.


How can law firms protect themselves?


Implement a mandatory three-step verification protocol

Judges have criticised firms for lacking proof of verification. Law firms should: 


  • Check authenticity by verifying all citations using authoritative databases. 
  • Check accuracy by confirming the cited passage exists and supports the proposition. 
  • Check relevance by ensuring the authority is appropriate and up-to-date. 
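
For firms that want these three checks to be auditable rather than informal, the record can be very simple, kept alongside the matter file. The sketch below is purely illustrative, assuming a hypothetical CitationCheck record and is_filing_ready helper; it documents that a human performed the checks and is not a substitute for a lawyer actually reading the authority.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CitationCheck:
    """One citation in a draft, with the three verification steps recorded."""
    citation: str                 # the citation as it appears in the draft
    authenticity_verified: bool   # found in an authoritative database
    accuracy_verified: bool       # the cited passage exists and supports the proposition
    relevance_verified: bool      # the authority is appropriate and up to date
    checked_by: str               # fee earner who performed the checks
    checked_on: date

def is_filing_ready(checks: list[CitationCheck]) -> bool:
    """A draft should only be filed once every citation has passed all three checks."""
    return all(
        c.authenticity_verified and c.accuracy_verified and c.relevance_verified
        for c in checks
    )
```

Keeping a record like this gives the firm something concrete to point to if a court later asks what verification was done.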


Ban AI tools for legal research unless approved


Commercial AI chatbots, such as ChatGPT or Google AI Overview, cannot yet perform reliable legal research. Firms should approve specific research tools and prohibit general-purpose AI for citations.


Introduce an AI use register


Document when AI tools are used in drafting or research. This allows transparency if queries arise later.
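
What the register looks like matters less than the fact that it exists and is kept consistently. As a minimal sketch, assuming a firm is content to keep the register as a shared CSV file (the file name, fields and example values here are hypothetical), each use of an AI tool could be logged like this:

```python
import csv
from datetime import datetime
from pathlib import Path

REGISTER = Path("ai_use_register.csv")  # hypothetical location for the firm's register
FIELDS = ["timestamp", "matter_ref", "user", "tool", "purpose", "output_verified_by"]

def log_ai_use(matter_ref: str, user: str, tool: str, purpose: str,
               output_verified_by: str = "") -> None:
    """Append one row to the register; the verification column can be completed later."""
    new_file = not REGISTER.exists()
    with REGISTER.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now().isoformat(timespec="seconds"),
            "matter_ref": matter_ref,
            "user": user,
            "tool": tool,
            "purpose": purpose,
            "output_verified_by": output_verified_by,
        })

# Example: a fee earner records that an approved tool produced a first-pass summary.
log_ai_use("ABC-2025-014", "j.smith", "Approved research tool", "First-pass summary of disclosure")
```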


Provide mandatory AI literacy training


Paralegals and junior lawyers are disproportionately likely to rely on AI without understanding its limitations. Training should include:

  • hallucination risks
  • proper verification
  • ethical duties
  • examples from recent cases 


Update client engagement letters


Include disclaimers about AI use and quality controls to manage expectations.


The November 2025 cases, together with a global tally now heading past 700 incidents, indicate that AI hallucinations are a real risk in legal practice. With judges actively testing AI tools, updating trackers and issuing sanctions, law firms can no longer rely on informal quality checks or good-faith assertions.


The legal sector stands at a turning point. Those who adapt will protect their clients and their practice. Those who do not may find themselves facing judicial criticism, regulatory intervention, or even criminal exposure.


AI can transform how work gets done, but law firms need to understand the opportunities and risks inherent in this technology. Our innovative AI compliance courses provide training that will ensure your firm stays ahead of the curve. Try them now.