AI-Generated Content: Balancing Privilege and Work Product Protections

AI, Privilege, and Work Product: Conflicting Federal Decisions Create a New Risk Frontier

Two recent federal court decisions—issued one week apart—reach sharply divergent conclusions on whether materials generated using artificial intelligence (“AI”) platforms are protected by the attorney-client privilege or the work product doctrine.

Case Overview

In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), U.S. District Court Judge Rakoff held that a criminal defendant’s exchanges with Claude (Anthropic’s publicly available AI tool)—which the defendant used independently of his counsel to analyze his exposure and defense strategy—were neither privileged nor protected work product. The Court found that Claude is not an attorney, that Anthropic’s privacy policy (which permits data collection, model training, and third-party disclosure) destroyed any reasonable expectation of confidentiality, and that materials created without counsel’s direction did not qualify for work product protection.

Conversely, in Warner v. Gilbarco (E.D. Mich. Feb. 10, 2026), U.S. Magistrate Judge Patti denied defendants’ motion to compel the production of a pro se plaintiff’s ChatGPT-assisted materials. The Court reasoned that AI platforms are “tools, not persons,” and that a waiver of work-product protections requires disclosure to an adversary (not to software). Compelling such discovery, the Court observed, “would nullify work-product protection in nearly every modern drafting environment.”

The Core Tension

These decisions underscore that the law governing AI use in litigation is unsettled and fact-sensitive. Litigants, in-house counsel, and compliance teams should act with care in deploying AI in connection with investigations and disputes, including taking privilege and work product issues into account.

Key Takeaways

  1. Treat AI interactions as potentially discoverable. Just as email reshaped discovery, generative AI will do the same. Assume that prompts and outputs are logged on third-party servers and may be subject to subpoenas or discovery requests, regardless of privilege arguments.
  2. Avoid inputting privileged or confidential information into consumer AI tools. Employees and clients must understand that communications with public AI platforms are not confidential and should not be treated as substitutes for privileged communications with attorneys.
  3. Conduct mandatory legal review of platform terms before use. Before using any AI platform for litigation-related tasks, evaluate its privacy policy and terms of service to ensure security and confidentiality are maintained.
  4. Prefer enterprise AI configurations with stronger contractual confidentiality protections. Generally available, consumer-grade AI tools are governed by broad terms of service that disclaim confidentiality; enterprise deployments can be negotiated to provide stronger contractual protections.
  5. Use AI at counsel’s direction and document the workflow. Heppner signals that attorney direction may be critical. Document that AI use is at counsel’s direction to strengthen work product arguments.
  6. Preserve work product arguments distinct from privilege. Work product protection may still apply where materials reflect litigation strategies or mental impressions that have not been disclosed to an adversary.
  7. Be prepared to resist intrusive AI-related discovery. Parties should argue that broad requests for AI prompts and outputs are disproportionate and irrelevant to the merits.
  8. Establish cross-departmental governance. Legal, compliance, IT, and business leadership should jointly oversee AI protocols and maintain clear channels for raising privilege concerns.

Looking Ahead

Courts are actively grappling with how traditional privilege doctrines apply to generative AI. One model (Heppner) emphasizes platform privacy terms, third-party disclosure risks, and the absence of attorney oversight; the other (Warner) focuses on the functional role of AI as a drafting tool. Companies should pay attention to what happens when privileged material is fed into an AI tool, as more litigation on this front is expected soon.

In summary, the question is no longer whether AI use implicates privilege, but how it is used, and whether that use preserves the structural conditions that privilege requires.
