Understanding AI’s Impact on Legal Privilege

AI Privilege and Waiver: What Courts Are Actually Saying (And What They’re Not)

When Judge Jed Rakoff ruled in United States v. Heppner (S.D.N.Y. Feb. 17, 2026) that documents a criminal defendant created through exchanges with Anthropic’s Claude platform weren’t protected by attorney-client privilege or the work product doctrine, the decision generated significant attention across the legal community. Many practitioners read the ruling as a sweeping statement: using AI tools waives privilege. That makes for a good headline, but it overstates what Heppner actually holds, and the Warner decision, issued a week earlier in the Eastern District of Michigan, shows why the distinction matters.

The Heppner Decision: Narrower Than It Appears

In Heppner, the trial judge ruled that documents a criminal defendant created through his own exchanges with Anthropic’s Claude platform and sent to his attorney afterwards were protected by neither attorney-client privilege nor the work product doctrine. The ruling rested on several specific facts:

  • Heppner used a public consumer AI tool that explicitly disclaims providing legal advice.
  • The platform’s privacy policy authorizes data collection, model training, and disclosure to third parties, including government authorities.
  • Heppner acted on his own initiative, without direction from his counsel.
  • The government had already seized the documents pursuant to a search warrant before the privilege question even arose.

On privilege, the court identified three independent deficiencies:

  • Claude is not a lawyer, so there was no attorney-client communication.
  • The platform’s terms defeated any reasonable expectation of confidentiality.
  • Heppner’s purpose was not to obtain legal advice from Claude, which disclaims that capacity.

On work product, the court found the documents were neither prepared by nor at the direction of counsel and did not reflect counsel’s strategy. Judge Rakoff noted that the analysis might differ if counsel had directed the AI use because the platform could then arguably function as an agent of counsel.

Most importantly, Heppner doesn’t hold that using AI tools automatically waives privilege. It holds that a non-lawyer querying a public AI tool that isn’t a lawyer and offers no confidentiality does not satisfy the foundational requirement for attorney-client privilege in the first place.

Warner: The Civil Counterweight

Now rewind one week. In Warner, a federal magistrate judge reached a different result in a civil case. A pro se party had used ChatGPT to prepare legal briefs in anticipation of litigation. When opposing counsel sought discovery of those materials, the court denied the request, holding the materials were not discoverable work product under Rule 26(b)(3) and independently not relevant or proportional under Rule 26(b)(1). Critically, the court also held that using AI didn’t waive work product protection, because AI tools are “tools, not persons,” and waiver requires disclosure to an adversary or in a way likely to reach one—a standard that AI use alone doesn’t meet.

One key difference here involves civil procedure vs. criminal procedure rules. Rule 26(b)(3) protects materials prepared in anticipation of litigation by a party or its representative; it doesn’t require that a lawyer prepare the materials, only that they were created in anticipation of litigation. The pro se litigant’s use of AI fell squarely within that protection, and the court saw no reason to treat AI-assisted drafting differently from any other tool a litigant might use to prepare her case.

The Real Distinction: It’s Not the AI, It’s How You Use It

This is the critical point most commentary misses. Heppner and Warner reach opposite conclusions not because one case says AI can never be privileged while the other says it always is. They reach opposite conclusions because of the specific circumstances in which the AI tools were used and the materials were sought.

In Heppner, a represented defendant used a public AI platform on his own initiative, without counsel’s direction, through a service whose terms disclaimed both legal advice and confidentiality. Those materials were then seized by the FBI pursuant to a search warrant. In Warner, a pro se litigant used AI as part of her own litigation preparation, and opposing counsel tried to compel production through a discovery request.

The procedural context matters enormously. Lawyers discussing AI privilege need to understand the circumstances under which the materials were created and how they ended up in dispute.

Extrapolating from Warner: Lawyers Using AI Tools

If a pro se party’s use of ChatGPT to prepare litigation materials qualified for work product protection in a civil case, the same logic should apply—and arguably applies even more strongly—when a lawyer uses AI tools. A lawyer directing the use of an AI tool as part of legal representation acts with more deliberation and control than a pro se litigant. As long as the materials are created in anticipation of litigation and not disclosed to an adversary, they should receive the same protection Warner afforded.

Using an AI tool does not, by itself, waive privilege or work product protection. What matters is whether the materials are created in anticipation of litigation and kept confidential. That is where practitioners should focus, because waiver—not the technology—is the real risk.

The Real Risk: Public and Commercial AI Tools

There is genuine waiver exposure when using public or off-the-shelf commercial AI tools. That’s because, as the Heppner decision emphasized, those platforms’ terms make clear that user information is neither private nor secured, and users have no guarantee of confidentiality. When you input confidential client information into ChatGPT or similar consumer tools, you’re disclosing that information to a third party without any contractual protection or confidentiality agreement.

If that information is later exposed through a data breach, logging, or litigation—like the ongoing OpenAI New York class action litigation—you’ve potentially waived privilege through disclosure, not through the mere act of using an AI tool.

Practical Implications

Lawyers and businesses using AI in their practice should focus on:

  1. Using enterprise AI tools or tools with explicit confidentiality agreements rather than public consumer tools.
  2. Implementing siloed or secure instances where AI interactions involving legal matters are segregated from general business operations.
  3. If AI is part of the litigation workflow, counsel should direct its use and maintain clear documentation that materials were created in anticipation of litigation, especially in civil matters where work product protections are broader.
  4. Not assuming that sharing AI outputs with counsel after the fact creates privilege. Heppner held that non-privileged materials don’t become privileged merely because they are later shared with an attorney.
  5. Avoiding disclosure of confidential client information to public AI platforms where you cannot control downstream use or exposure.
  6. Updating AI governance and acceptable use policies to specify which platforms are approved, what information may be entered, and what protocols apply when AI-generated materials touch on litigation, investigations, or regulatory matters.

Both decisions make one point unmistakable: AI prompts and outputs are electronically stored information (ESI), and therefore subject to preservation obligations, civil discovery, criminal search, and subpoena production. Neither case ends the conversation about whether AI use is categorically safe or unsafe for privilege. The privilege analysis turns on the same factors it always has: whether there is a confidential communication with a lawyer for the purpose of obtaining legal advice, whether materials are created in anticipation of litigation, and whether confidentiality is maintained. The AI tool itself is neutral; it is a powerful technology, but still a technology application like Westlaw, Google, email, or text messaging. How you use it, who is using it, and why determine whether privilege applies in the first place. And assuming it is privileged, the steps you take to secure the content from publication or disclosure determine whether that privilege is waived.
