When AI lies: The court case raising alarms

In a revealing moment during a February 2026 hearing before the Nebraska Supreme Court, Greg Lake, an Omaha attorney, attempted to explain why his appellate brief contained dozens of deeply flawed legal citations. Some referenced cases that did not exist. Others attributed fabricated quotations to real judges. Several cited genuine Nebraska decisions but completely misrepresented what those rulings actually said. What initially appeared to be a serious but perhaps careless filing error quickly became something far more consequential when one of the justices asked, “The elephant in the room is whether or not you used artificial intelligence. Did you?”

Lake said no. That answer would ultimately become as important as the hallucinated citations themselves.

Days later, the attorney admitted that he had in fact used generative AI in preparing the filing and described his failure to be candid with the court as a “grave error of judgment.” Soon afterward, the Nebraska Supreme Court suspended him indefinitely from practicing law while disciplinary proceedings continued. What began as yet another story about AI hallucinations suddenly became a defining case about professional responsibility in the age of AI.

The underlying divorce case was ordinary. The proceedings had been moving through Nebraska courts for years and involved disputes over child custody and the division of marital assets. What transformed the case into a national cautionary tale was the appellate brief filed on behalf of the father.

According to the court’s findings, the filing contained 63 legal references, 57 of which were defective in some way. Twenty of them were described as outright hallucinations: fabricated legal authorities, invented quotations, and fictional legal reasoning.

These were not minor formatting issues or technical citation errors buried in footnotes. The court found itself confronting legal arguments built upon authority that had effectively been manufactured out of thin air.

The consequences were severe. The father whose appeal relied on the filing lost the case and was reportedly ordered to pay roughly $52,000 of the opposing side’s legal fees. The attorney now faces disciplinary proceedings.

Why AI hallucinations are so dangerous

What makes this case so striking is how familiar the underlying dynamic has become.

Generative AI systems are now deeply embedded in professional workflows across many sectors. They draft emails, summarize research, generate reports, produce code, and increasingly assist with analytical work that was once considered firmly within the domain of trained professionals. These systems are fast and remarkably convincing.

But that is also what makes them dangerous. When conventional software fails, the failure is obvious: a spreadsheet breaks, a database crashes, a coding error triggers a malfunction. When generative AI fails, it produces information that sounds authoritative regardless of whether it is true.

Hallucinations happen because these systems are built to generate plausible-sounding language, not to check it against reality. The result is output that can appear completely credible while being false.

In the Nebraska case, the brief reportedly looked professionally constructed at first glance. The citations appeared legitimate. The legal language sounded persuasive. But when anyone attempted to verify the authorities through standard legal research tools, the filing began to collapse. The court noted that even basic diligence would have exposed many of the fabricated citations almost immediately.

Not just a lawyer problem

Courts were among the first institutions forced to confront this issue publicly because legal work depends heavily on verifiable authority and adversarial scrutiny. Judges and opposing counsel are trained to test claims against established records. But the same underlying problem is rapidly emerging across every profession that relies on expertise, documentation, and trust.

A financial analyst relying on AI-generated research may cite market data that does not exist. A consultant may unknowingly include fabricated statistics in a client presentation. A medical professional could reference studies that were never published. A compliance officer may generate policy guidance based on regulations that are inaccurately summarized or entirely fictional. A software engineer might deploy AI-generated code that appears functional while containing hidden vulnerabilities or invented dependencies.

In each case, the danger is not simply that the AI makes mistakes. It is that the mistakes arrive wrapped in professional fluency.

The Nebraska Supreme Court seemed acutely aware of this broader significance when it warned that the obligations of “candor, competency, diligence, and making good faith arguments remain the same” regardless of whether AI tools are involved.

The court was not condemning AI. The justices actually acknowledged that generative AI can provide meaningful benefits when used with caution and humility. What the court rejected was the idea that responsibility could be outsourced.

That distinction matters because many organizations still misunderstand the nature of AI risk. There is an assumption that enterprise-grade tools or professionally branded AI platforms are inherently reliable in ways that consumer chatbots are not. Yet several recent hallucination incidents have reportedly involved expensive legal AI systems and enterprise products, not only public-facing tools.

The real lesson?

The question emerging from this and similar cases is not whether professionals should use AI. That debate is effectively over. AI systems are already woven into modern workflows, and their role will only expand. The real question is whether the human being attaching their name to the final product has genuinely verified what the machine produced.

It is easy to imagine how the Nebraska case happened: a looming deadline, a difficult case, and a tool that promised efficiency and produced a draft that looked convincing enough to trust. The progression feels disturbingly ordinary because versions of it are now unfolding every day in offices around the world.

Most of those incidents will never reach a supreme court. Many will likely pass unnoticed. But some will not. And as AI-generated content becomes more deeply integrated into professional decision-making, institutions are beginning to draw firmer lines around accountability.

A warning for everyone

The Nebraska suspension may eventually be remembered as one of the first major professional disciplinary cases of the AI era, not because a machine hallucinated, but because a professional failed to verify the result and then lied about it.

That distinction is important because the AI did not sign the filing, did not stand before the court, and owed no duty of competence and candor to the client or to the justice system.

The lawyer, a human being, did. That is the lesson this case leaves us with.
