A UK law firm has been ordered to pay wasted costs after submitting an application that cited two fictitious, AI-generated cases: the latest in a growing line of courtroom embarrassments caused by unverified use of artificial intelligence.
What happened?
According to barrister Alexander Bradford of St Philips Chambers, the unnamed firm represented a former student in a claim against Birmingham City University for breach of contract, negligence and fraud.
On 10 July 2025, the firm submitted an application containing two fabricated authorities. When the university’s solicitors, JG Poole & Co, were unable to locate the citations, they requested copies. Instead of responding, the firm withdrew the application, refiled it without the fictitious cases, and told the court the earlier version had been submitted “in error.”
The claim was struck out with indemnity costs on 30 July 2025. In later statements, the firm admitted that the fake cases had been generated by an AI research feature embedded in its legal software. The feature had been used by a member of administrative staff who failed to verify the sources and who had also signed the statement of truth in the solicitor’s name without authorisation.
The judge’s ruling
His Honour Judge Charman found that the conduct of both the solicitor and the firm was improper, unreasonable and negligent, applying the guidance given by the Divisional Court in Ayinde in June. He held that their explanation was inadequate and that the threshold for a wasted costs order had been met.
Bradford commented that the decision is “a further reminder” of “the risks that large language models pose to the administration of justice.”
A pattern of AI misuse in the courts
This ruling follows a series of recent incidents in which legal professionals have been caught relying on AI hallucinations, that is, fabricated cases or quotations presented as genuine authorities. In July, more than 50 fake cases were identified in submissions to UK courts, prompting widespread concern about AI’s reliability in legal practice.
In one example, a barrister who used ChatGPT to draft tribunal grounds was referred to the Bar Standards Board after failing to admit his AI-generated citation was false.
These cases build on earlier warnings that AI tools can “make it up” with alarming confidence, a phenomenon seen when ChatGPT fabricated entire judgments and court references for solicitors who did not verify its output.
Even when AI is used in good faith, lawyers risk breaching their professional duties if they fail to check its output or to disclose its use.
What law firms should do now
The latest ruling reinforces the need for strict verification, supervision and transparency when AI is used in legal work. Firms should:
- Verify every authority manually: never rely solely on AI-generated citations (a simple illustrative check is sketched after this list).
- Restrict AI features within document or research software until quality controls are in place.
- Implement disclosure policies requiring lawyers and staff to confirm when AI tools have been used.
- Deliver targeted AI training for support staff, ensuring they understand professional duties and signature protocols.
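For firms that want a technical backstop to manual checking, the sketch below shows one possible approach: a short Python script that pulls citation-like strings out of a draft and flags any that do not appear in a firm-maintained list of already-verified authorities. The list, the regular expression and the example citations are illustrative assumptions, not part of the ruling or of any particular practice-management product, and a flagged citation still needs to be checked by a person against the original report.

```python
import re

# Hypothetical, firm-maintained set of authorities that a human has already
# verified against an official source. The entry below is a placeholder.
VERIFIED_AUTHORITIES = {
    "[2025] EWHC 1383 (Admin)",
}

# Rough pattern for UK neutral citations such as "[2025] EWHC 1383 (Admin)"
# or "[2024] EWCA Civ 999". Real citation formats vary; illustrative only.
NEUTRAL_CITATION = re.compile(
    r"\[\d{4}\]\s+[A-Z]{2,}(?:\s+[A-Za-z]+)?\s+\d+(?:\s+\([A-Za-z]+\))?"
)

def flag_unverified_citations(draft_text: str) -> list[str]:
    """Return citations found in the draft that are not on the verified list."""
    found = NEUTRAL_CITATION.findall(draft_text)
    return [citation for citation in found if citation not in VERIFIED_AUTHORITIES]

if __name__ == "__main__":
    draft = (
        "The claimant relies on Smith v Jones [2024] EWCA Civ 999 and "
        "R (Ayinde) v Haringey [2025] EWHC 1383 (Admin)."
    )
    for citation in flag_unverified_citations(draft):
        print(f"UNVERIFIED: {citation} - check the original source before filing")
```

A check of this kind can only supplement, not replace, the manual verification the courts expect.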
The message from the courts is unambiguous: AI errors are not excusable. The convenience of automation does not absolve legal professionals of their duty to verify what they put before the court, nor shield them from the consequences when they fail to do so.