AI hallucinations—fabricated cases that appear convincingly real—are no longer a fringe issue in legal practice. In July 2025 alone, over 50 cases involving fake legal citations generated by AI tools were publicly reported across multiple jurisdictions. This figure doesn’t even account for private reports still under review, suggesting the true scale of the problem is much larger.
Despite early hopes that hallucinations would decline as legal professionals became more familiar with AI tools, the opposite has occurred. July 2025 marked the highest monthly volume of documented hallucination-related legal misconduct to date. And alarmingly, the incidents are not limited to junior staff or fringe firms. In some of the most striking cases, experienced attorneys at reputable law firms have been sanctioned for submitting entirely fabricated case law generated by AI.
What went wrong in Johnson v Dunn
A recent example is the US federal case Johnson v Dunn, where attorneys submitted two motions supported by fake legal authorities. The citations, generated by ChatGPT, were presented as genuine jurisprudence but didn’t exist. The attorneys acknowledged the error and accepted responsibility. But the court, issuing a 51-page sanctions order, was unequivocal: the misconduct was serious, and previous light-touch responses to similar incidents had clearly failed to deter recurrence.
The court imposed sanctions including public reprimand, disqualification from the case, and referral to licensing authorities. It also required the attorneys to notify all current clients and colleagues of the order, a humiliating but deliberate step to reinforce accountability.
This isn’t about incompetence. The lawyers in Johnson v Dunn were seasoned professionals at a high-performing law firm, with access to leading legal databases and internal compliance policies. They had been warned of the risks of AI. They knew better, and still the hallucinations slipped through.
It’s a sobering reminder: even the best-resourced and most experienced professionals are vulnerable to the subtle deceptions of generative AI.
Why AI training and policies matter
Firms can no longer afford to treat AI as a plug-and-play productivity tool. Misuse of AI in legal research, drafting, or evidence gathering now poses a reputational and regulatory risk. The time has come for firms to embed clear expectations and safeguards into their everyday legal practice.
Here’s what law firms must do immediately:
Train all fee-earners and support staff on AI basics.
Staff must understand what AI tools can and cannot do. That means recognising hallucinations, understanding how LLMs work, and knowing when to verify information.
Make citation verification non-negotiable.
Any case, statute, or legal principle produced by an AI tool must be independently verified via trusted legal research platforms. AI output is no substitute for due diligence.
Mandate disclosure of AI use.
Firms should require staff to log when AI tools have been used in any legal output, especially for documents submitted to courts or regulators.
Publish and enforce an AI usage policy.
A firm-wide AI policy should define acceptable tools, set out clear rules for legal drafting, require source verification, and assign responsibility for oversight.
Foster a culture of professional scepticism.
AI output should be treated as hearsay, unverified until corroborated against a primary source. Encourage staff to view LLMs not as experts but as capable interns: useful, but never authoritative.
What should an AI policy include?
A compliant and practical AI policy for law firms should cover:
- Permitted tools – whether staff may use ChatGPT, CoCounsel, Harvey, etc.
- Approved use cases – internal brainstorming vs. client-facing work
- Verification protocols – all citations must be cross-checked in Westlaw, LexisNexis, or another trusted research platform
- Disclosure rules – if AI was used in drafting a document, that must be noted
- Training requirements – all lawyers must complete an annual AI risk module
- Reporting obligations – suspected hallucinations must be escalated
- Sanctions for misuse – internal disciplinary measures, remediation plans
The stakes: More than just embarrassment
While some may view AI hallucinations as growing pains in the adoption of new tech, courts are making it clear: fabricating legal authority, even unintentionally, is professional misconduct.
The fundamental issue is not the AI but the humans using it. Misuse that stems from incompetence or a lack of oversight will not be excused, however unintentional.
Sanctions will escalate. Reputations will suffer. Clients will walk. And eventually, regulatory bodies may impose minimum standards on AI use, making voluntary policies a safer bet than waiting for enforcement.
Firms must now move beyond informal safeguards and embrace a formal, documented approach to AI governance.
Because when the hallucination is exposed—and it will be—it won’t matter whether your lawyer was junior or senior, rushed or overconfident. It’ll matter what your firm did to prevent it.