When AI gets it wrong: Why law firms need to manage risk and liability

AI is increasingly being used in legal research and drafting, but a recent judgment underscores the need for careful oversight and, significantly, the potential liability risks for law firms. The case, Ndaryiyumvire v Birmingham City University, demonstrates how AI-generated content can intersect with professional responsibilities and regulatory exposure.

The case

In July 2025, solicitor Raphael Newton filed an amended claim that inadvertently cited two non-existent cases. The citations had been generated by the research function of the firm’s legal software and attached to a draft prepared by administrative staff. Although the errors were quickly identified and corrected, the incident led to a wasted costs order against the firm, Gordon & Thompson.

Significantly, His Honour Judge Charman decided against referring Newton personally to the SRA, finding that the failures were largely administrative and managerial rather than misconduct by the individual solicitor. While submitting false authorities is a serious matter, the judge held that on these facts, including the prompt withdrawal of the document and the firm’s internal corrective measures, additional sanctions would be disproportionate.

The key takeaways for law firms

AI is a tool, not a substitute for oversight

The case demonstrates that AI can generate content that appears authoritative yet is inaccurate or, as here, entirely fictitious. Law firms must implement robust verification processes, ensuring that every citation or legal argument generated by AI is checked by a qualified solicitor before filing. Relying solely on AI outputs exposes firms to wasted costs orders and reputational damage.

Administrative oversight is crucial

The errors were traced to administrative staff who filed a draft document containing unverified AI-generated citations. Firms must strengthen document control, labelling, and approval workflows to prevent the accidental submission of incomplete or inaccurate materials. Mandatory internal review procedures and staff training are essential safeguards.

Management responsibility can drive liability

Judge Charman emphasised that the incident was a failure of management. Law firms are expected to maintain clear governance structures that assign accountability for compliance with practice rules, even when errors originate outside the solicitors themselves. This aligns with broader regulatory expectations around judgement-based compliance and risk management.

Mitigate risk with policies and training

In response, Gordon & Thompson implemented several protective measures:

  • Verification of all citations by solicitors before filing

  • Clear labelling and control of draft documents

  • Enhanced staff training on document control and signature protocols

For most law firms, proactive adoption of similar policies will reduce the likelihood of regulatory action and professional liability.

Implications for the future of AI in law

  • Due diligence on AI outputs is mandatory. AI-generated research should never bypass human verification.

  • Policies must evolve with technology. Firms should update risk frameworks, staff responsibilities, and governance structures to account for AI-assisted drafting.

  • Liability flows up to management. Even when errors are caused by junior staff or AI, senior management may bear responsibility for oversight failures.

  • Regulators are watching. As AI becomes embedded in legal workflows, expect increased attention from the SRA on how firms integrate technology into professional practice.

This case is an important reminder that AI can be a powerful assistant in legal practice, but its use carries professional responsibility. Law firms must balance innovation with diligence by verifying outputs, strengthening document controls and embedding AI into governance and risk frameworks.

For firms that want to embrace AI, it is clear that adoption cannot come at the expense of oversight or professional accountability. The courts and regulators are increasingly capable of scrutinising AI-assisted work, and robust internal controls are becoming as critical as legal expertise.

AI can transform how work gets done, but companies and firms need to understand both the opportunities and the risks inherent in this emerging technology. Our innovative AI compliance courses provide training to help you stay ahead of the curve, avoid compliance fines and protect against reputational damage. Try it here.