There’s no doubt that AI tools like ChatGPT are changing how everyone works. But some cautionary tales from inside courtrooms on both sides of the Atlantic should make legal and other professionals pause before relying on AI-generated outputs without due diligence.
These cases and our recent webinar on AI practices make it clear: If you blindly trust AI, you could end up causing real damage.
Fake cases in real court submissions
In a recent judicial review case in the High Court, Mr Justice Ritchie was surprised to find that five non-existent case citations had been submitted to the court. The fabricated citations appeared in written submissions prepared by barrister Sarah Forey, instructed by Haringey Law Centre.
When the defendant’s team pointed out that the cases didn’t exist and asked for the original judgments, they received nothing. Even more shockingly, the instructing solicitor, Sunnelah Hussain, brushed off the citations as mere “cosmetic errors” that required no explanation. Justice Ritchie slammed that excuse as “remarkable” and “grossly unprofessional.”
Forey initially claimed the errors stemmed from mistakenly including the wrong documents from her personal archive of case reports. Later, under judicial scrutiny, she admitted the problem was more serious. Although the court could not confirm whether AI had generated the fake cases, it noted this as a likely possibility.
Justice Ritchie expressed his frustration: “Ms. Forey put a completely fake case in her submissions. That much was admitted. It is such a professional shame. The submission was a good one. The medical evidence was strong. The ground was potentially good. Why put a fake case in?”
In the end, while the underlying case was successful, the claimant’s awarded costs were reduced by £7,000 due to the misconduct. Moreover, the judgment has been referred to both the Bar Standards Board and the Solicitors Regulation Authority.
30 years of experience but zero verification
In New York, attorney Steven Schwartz, a partner at Levidow, Levidow & Oberman with more than 30 years of legal experience, made headlines for a similar blunder. In preparing a personal injury brief, Schwartz used ChatGPT to research relevant case law. The tool responded with convincing citations to what appeared to be precedents from federal courts.
The problem? None of them existed.
When the brief reached the judge, the errors were quickly flagged. In court, Schwartz admitted he hadn’t verified any of the citations. He had no idea ChatGPT was capable of fabricating information and took the tool’s output at face value.
This wasn’t just a minor oversight; it became a public scandal. The court fined Schwartz and his firm $5,000, and the incident was widely covered as a cautionary tale about placing blind trust in AI.
The bottom line? Verify
AI is not a source of truth. Tools like ChatGPT are trained to generate plausible-sounding text based on patterns in data, not to retrieve verified, citable facts. They can be powerful assistants, but they don’t check their own work, and they certainly don’t know what’s real and what isn’t.
When lawyers insert unverified AI-generated content into legal documents, they risk not only losing credibility but also facing professional sanctions. And the same applies across all professional sectors.
Can you protect yourself and your work when using AI tools? Yes.
- Always verify information. Whether your industry is law, healthcare, education, finance, or journalism, never assume that AI-generated information is accurate. Cross-check any data, statistics, quotes, or sources using trusted, field-specific platforms or databases. If you didn’t independently verify it, don’t rely on it.
- Use AI for support, not substance. It can help you generate ideas, summarize content, or organize your thoughts, but it should never replace your professional judgment or domain expertise. Remember: it’s a springboard, not a final answer.
- Make sure to educate your team. It’s critical that everyone in your company, from interns to the C-suite, understands the limitations of AI. Training should emphasise responsible use, fact-checking and ethical considerations.
- Be transparent. In some sectors, transparency around AI use is becoming a best practice or even a requirement. If your output includes AI assistance, be clear about what was generated by AI and what was created or verified by a human.
AI is here to stay. It will continue to shape legal practice in exciting and sometimes unsettling ways. But as these two cases show, the ultimate responsibility lies with the human lawyer, not the algorithm.
AI tools offer incredible efficiencies. But these high-profile cases demonstrate the very real risks of treating ChatGPT or similar tools as infallible. While AI can be a powerful tool, it is no substitute for human expertise, ethics or accountability.
Our recent webinar, AI compliance and ethical practices – Ensuring the responsible use of AI in your organisation, explored how AI regulations, tools and ethical considerations are shaping the way businesses and law firms operate, and how you can manage the risks and reap the rewards. It’s available for download.