Is privilege on the line? A US AI ruling UK lawyers can’t afford to ignore

AI is now firmly embedded in legal workflows. From first-draft submissions to regulatory analysis, generative AI tools are increasingly part of how legal work gets done.

But a recent US ruling sends a sharp warning: deploying the wrong AI tool, or deploying it carelessly, can put legal privilege at risk.

For UK lawyers and in-house teams, this decision does not change the law here. UK legal professional privilege remains governed by its own doctrines. However, the ruling is significant not because it binds UK courts, but because it signals how judges may begin to think about AI and confidentiality. And that should be of great interest to anyone advising on litigation risk, regulatory exposure or cross-border matters.

The first major AI privilege decision

In February 2026, the US District Court for the Southern District of New York issued what is widely regarded as the first federal decision addressing whether AI-generated material can be protected by privilege.

In United States v. Heppner, Judge Jed S. Rakoff held that documents created by a criminal defendant using a publicly available version of Anthropic’s Claude were not protected by either attorney-client privilege or the work product doctrine.

The facts: Bradley Heppner, a CEO facing fraud charges, used a consumer AI tool to analyse his legal exposure and develop potential defence strategies. In doing so, he entered information he had received from his lawyers. He generated dozens of documents, then later shared them with his defence team. When the FBI seized his devices, those AI-generated materials were discovered. His lawyers claimed privilege. The court rejected the claim.

Judge Rakoff’s reasoning was that attorney-client privilege depends on confidentiality. By entering information into a third-party platform whose terms of service allowed data collection and potential disclosure, Heppner had voluntarily shared that information outside the privileged relationship. Claude was not his lawyer. There was no legal duty of loyalty. And privilege could not be created retroactively by forwarding the AI’s output to counsel after the fact.

The ruling did not condemn AI use. In fact, Judge Rakoff left open the possibility that attorney-directed use of secure, enterprise-grade AI tools might be treated differently. But consumer AI platforms are considered third parties, and disclosure to them may waive privilege.

A different answer in a different court

What makes this decision more interesting is that, on the same day, another federal court reached the opposite conclusion. In Warner v. Gilbarco, Inc., decided by the United States District Court for the Eastern District of Michigan, Magistrate Judge Anthony P. Patti considered whether a pro se litigant’s use of ChatGPT waived work product protection.

He said no. Judge Patti drew a distinction: attorney-client privilege and work product protection are not identical. Waiver of the former can occur upon voluntary disclosure to a third party. Work product waiver is narrower and generally requires disclosure in a way that substantially increases the likelihood an adversary will obtain the material.

Judge Patti rejected the premise that generative AI is automatically a “person” to whom disclosure is made. In his words, “ChatGPT (and other generative AI programs) are tools, not persons.” Forcing disclosure of the litigant’s AI-assisted drafting, he reasoned, would amount to compelling production of her internal mental impressions.

Why this matters in the UK

These rulings do not alter privilege law in the UK. And it is important to understand what that means. There is no single, unified “UK privilege law”. Privilege operates separately under English law (in England and Wales) and Scots law, and while the core principles are similar, the categories and scope are not identical.

Under English law, legal professional privilege is generally divided into two main categories: legal advice privilege and litigation privilege. Scots law recognises comparable protections, but terminology and development differ in some areas. Courts in Britain have not yet directly addressed whether inputting confidential material into consumer AI platforms amounts to waiver. That question remains open.

Legal advice privilege protects confidential communications between a client and their solicitor for the purpose of giving or receiving legal advice, including advice from in-house lawyers acting in a legal capacity. It does not extend to other professionals, and in corporate contexts not every employee will necessarily count as “the client”. If privileged communications are entered into a consumer AI platform, the argument could be that confidentiality has been compromised by disclosure to a third party.

Litigation privilege is broader. It protects confidential communications and documents created for the sole or dominant purpose of actual or contemplated litigation, and can extend to third parties such as expert witnesses. In Scotland, material created after litigation has begun is often described as “post litem motam”. Here also, the critical issue would be whether using AI is consistent with maintaining confidentiality.

Matters are further complicated by joint privilege and common interest privilege, which are clearly recognised in England and Wales but less certain in Scotland. Both depend on controlled sharing between parties with aligned interests, so bringing a consumer AI platform into that process could create legal uncertainty. Without prejudice privilege, which protects settlement talks, also relies on confidentiality. Using an unsecured AI tool could put that protection at risk if sensitive negotiations are shared.

The point is that privilege is fragile everywhere because it rests on confidentiality. If courts begin to treat consumer AI platforms as third parties akin to external consultants without robust confidentiality safeguards, similar arguments to those seen in the US could arise here.

Cross-border litigation sharpens the risk. A UK executive involved in US proceedings could find that material generated via consumer AI is subject to US discovery rules, regardless of how it might be characterised under English or Scots law.

The bottom line: increasing regulatory focus on AI governance, combined with the need to protect confidential information, means this issue is unlikely to remain theoretical for long. And most firms have not yet drawn clear lines between AI experimentation and legally protected work.

The consumer vs enterprise divide

The Heppner case involved a publicly available, consumer-grade AI platform, used independently by the client, under terms of service that permitted data use and disclosure. There was no contractual confidentiality agreement. No lawyer direction. No enterprise safeguards.

The court did not indicate that all AI use destroys privilege. It did not consider private instances, zero-retention configurations, or contractually secured enterprise deployments.

This distinction is significant. Modern legal practice already relies heavily on cloud-based systems. Email, document management, and secure portals all involve third-party infrastructure. Courts have not treated their use as automatically destructive of privilege when appropriate safeguards are in place.

Whether AI will ultimately be analysed differently remains unclear. The more tightly AI is managed and overseen by lawyers, the easier it will be to argue that privilege applies.

A warning, not a ban

The Heppner ruling should be viewed as a cautionary application of longstanding principles to new technology. It does not prohibit AI in legal practice. It does not say that AI is inherently incompatible with privilege.

But it does remind lawyers that privilege depends not on intention, but on structure. If confidential information is shared with a third party without adequate safeguards, protection may be lost. The convenience of AI does not override that rule.

The legal profession is unlikely to stop using AI. There are real efficiencies in its use and clients are already using these tools. The main question is whether a firm’s AI rules actually safeguard privilege. 
