When AI hallucinates and lawyers pay: The $86K legal wake-up call

The era in which AI-hallucinated case citations could be dismissed as a novelty is clearly over.

In August 2025, the Southern District of Florida imposed nearly $86K in sanctions against plaintiffs’ counsel in ByoPlanet International, LLC v. Johansson and Gilstrap. It is the largest sanction to date for filing hallucinated AI-generated legal authority, and a watershed moment for the profession.

The court did not dismiss this as a mere error or a misunderstanding of new technology. It cited repeated, systemic, bad-faith misuse of generative AI that persisted despite multiple warnings, motions to dismiss and explicit notice that the citations were false. The result: dismissed cases, fee-shifting sanctions and, most significantly, a judicial opinion that will be cited for years. It’s time to acknowledge that careless AI use now carries real litigation risk.

What happened? From helpful tool to sanctions

The sanctioned attorney admitted to using ChatGPT and other AI tools to draft complaints, motions and appellate briefs across at least eight related cases. Over months, the filings included non-existent cases, fabricated quotations attributed to real cases, false parentheticals, misstatements of holdings, and repeated errors even after explicit notice from opposing counsel and the court.

Critically, the lawyer did not verify citations, relying instead on a paralegal and assuming AI outputs were accurate. Even after motions explicitly pointing out fake authorities, the conduct continued. The court was unequivocal:

“A reasonable attorney does not blindly rely on AI to generate filings… What happened here constitutes repeated, abusive, bad-faith conduct that cannot be recognized as legitimate legal practice and must be deterred.”

The sanctions included full reimbursement of opposing counsel’s fees for time spent untangling AI-generated fiction.

Why this case matters

This case matters more than earlier AI hallucination cases for several reasons.

The dollar amount changes the risk calculus. Earlier AI-related sanctions typically ranged from $1,500 to $15,000. This case blows through that ceiling. At nearly $86K, the sanction is large enough to trigger insurance scrutiny, raise internal disciplinary issues, create partner-level exposure, invite malpractice claims and, importantly, damage firm-wide reputation. AI misuse is no longer just a training issue; it is a balance-sheet issue.

Courts are losing patience. Judges across the US are openly complaining that hallucinated citations waste judicial resources and distract from the merits of cases. With federal courts already understaffed and backlogged, AI errors are being treated as abuses of the judicial process.

“We didn’t know AI hallucinates” is no longer a credible defense. In 2023, ignorance might have been plausible. In 2025, it is not. An estimated 712 judicial decisions worldwide now address AI hallucinations, 90% of them issued this year alone. Lawyers are now expected to understand AI’s limitations and to supervise its use accordingly. Failure to do so is increasingly framed as bad faith, not negligence.

Fee-shifting is the new enforcement mechanism. The most dangerous development for firms is procedural, not technological. Opposing counsel now know to ask for fees. Once courts accept that time spent responding to AI-tainted filings is compensable, sanctions scale rapidly. That is exactly how the ByoPlanet figure reached $86K, and why even higher numbers are likely coming.

The professional duty has not changed, but the consequences have. Courts have been clear that using AI is not prohibited. What is prohibited is abdicating professional judgment.

The duty remains exactly what it has always been: verify every citation, read the cases you cite, ensure quotations are accurate, supervise staff and tools, and conduct a reasonable inquiry before filing.

AI does not dilute ethical obligations. It magnifies the cost of ignoring them.

What law firms need to do now

Treat AI use as a regulated activity

Firms should have clear, written rules on where AI may be used, what must always be independently verified and who is accountable for review and sign-off. “Everyone uses it” is not a policy.

Mandate citation verification

If a case or quotation appears in a filing, someone must pull the actual decision from a trusted legal database, confirm the holding, confirm the quote and confirm the relevance. If that feels inefficient, the alternative now costs close to six figures.

Train lawyers and staff on AI failure modes

Hallucinations are not edge cases. They are a known feature of generative AI. Firms must ensure lawyers understand why hallucinations occur, when they are most likely and why AI output cannot be treated as research. Courts now expect this literacy.

Update risk, insurance, and supervision frameworks

This is no longer just a tech issue. It intersects with professional negligence, supervision obligations, client disclosure, regulatory expectations and insurer reporting thresholds. Firms that ignore this do so at their own peril.

The ByoPlanet sanctions mark a turning point. AI hallucinations are no longer amusing anecdotes or early-adopter mishaps. They are now sanctionable misconduct with serious financial consequences. AI can be a powerful tool for lawyers, but only when paired with rigorous verification of its sources.

AI can transform how work gets done, but law firms need to understand the opportunities and risks inherent in this technology. Our innovative AI compliance courses provide training that will ensure your firm stays ahead of the curve. Try them now.