Fake cases, real consequences: The AI crisis facing UK law firms

The legal profession in England and Wales has entered uncharted territory. In a stark warning delivered by the High Court in June 2025, senior judges condemned the misuse of artificial intelligence tools by solicitors and barristers who submitted fake legal authorities in court. These weren’t obscure technicalities, but wholly fictitious case citations that made their way into legal arguments, judicial review applications, and even multimillion-pound commercial litigation.

 

For the legal sector, the message is clear: AI is not a shortcut. It is a powerful tool that, without proper understanding and oversight, can expose law firms to regulatory action, reputational damage, and court sanctions.

 

What happened to prompt the rebuke?

Two recent cases triggered the High Court’s intervention. In Ayinde v London Borough of Haringey, a pupil barrister representing a homeless client submitted at least five entirely fake authorities in a claim for judicial review. She claimed the cases were the result of general online searches and denied knowingly using AI, but the court found her explanations lacked credibility. It concluded that either she had used generative AI and lied about it, or she had deliberately fabricated the citations. Either scenario met the threshold for contempt of court.

 

In Al-Haroun v Qatar National Bank, the situation was arguably worse. Eighteen of the forty-five legal authorities cited in a witness statement turned out to be fictitious, and some of those that did exist were misquoted or cited for propositions they did not support. In a particularly ironic twist, one invented authority was falsely attributed to the very judge presiding over the matter.

 

The judge made it clear that providing false material as if it were genuine could be considered contempt of court or, in the “most egregious cases,” perverting the course of justice, which carries a maximum sentence of life in prison.

 

The solicitors and barristers involved have now been referred to their respective regulators: the Solicitors Regulation Authority (SRA) and the Bar Standards Board (BSB).

 

How did this happen?

The root cause is the explosion of generative AI tools, such as ChatGPT, used without proper validation. Unlike legal databases, these models do not retrieve verifiable case law; they generate plausible-sounding text based on probability. As the court warned, they “may cite sources that do not exist… [and] purport to quote passages from a genuine source that do not appear in that source.”

 

This phenomenon, known as AI hallucination, is not new. But it is now leading to real-world consequences in UK courts, including wasted costs orders, regulatory referrals and, in the most extreme cases, contempt of court or even criminal charges.

 

Law firms are now on notice

The High Court issued an unambiguous call to action: heads of chambers and managing partners must take “practical and effective measures” to ensure that every legal professional—regardless of seniority—understands their duties when using AI. This includes clerks, paralegals, trainees, and partners.

 

Relying on good intentions, or assuming junior staff understand the limitations of AI, is no longer acceptable. Everyone in the profession must be trained to understand the risks. The court went so far as to say that, in future hearings, it may inquire directly whether leadership responsibilities for AI oversight have been fulfilled.

 

The compliance risks of AI

Firms that fail to act face severe consequences:

 

  • Wasted costs orders: Lawyers who submit AI-generated false material risk paying the opposing party’s legal costs.
  • Regulatory referrals: The court has begun referring solicitors and barristers directly to the SRA and BSB.
  • Contempt of court: Knowingly placing fake authorities before the court, or being reckless as to their truth, may lead to contempt proceedings.
  • Reputational damage: In both reported cases, junior lawyers had their actions detailed in public judgments, permanently tying their names to professional misconduct.
  • Criminal exposure: In rare but serious cases, using fake evidence to interfere with justice may amount to perverting the course of justice, a crime carrying a maximum sentence of life imprisonment.

Training AI is not enough: Train your staff first

The fundamental issue is not the AI but the humans using it. The court made clear that even unintentional misuse, if it results from incompetence or lack of oversight, will not be excused.

 

Every law firm must now ensure:

 

  • Staff know what generative AI is and what it isn’t (i.e. it is not a legal research tool).
  • All outputs from AI tools are independently verified against official sources.
  • Clear policies exist for when and how AI may be used.
  • Junior lawyers, especially pupils and trainees, receive formal training on the risks of AI hallucinations and on using AI more generally.
  • Supervisors understand their accountability if they fail to detect misuse by junior staff.

A warning to the legal profession

This moment may come to be seen as a tipping point in legal ethics and practice. AI will continue to play a role in legal work, but it must do so with the right safeguards in place. Law firms must now ask: do we know how our staff are using AI? If not, it’s time to find out, before the courts do.

 

Try VinciWorks AI training for your law firm today