AI-Generated Materials and Legal Privilege: Contrasting Court Rulings

Two recent decisions from U.S. federal courts have produced strikingly different outcomes on whether materials generated using AI tools are protected by privilege or the work-product doctrine. Both cases involved litigants relying on consumer AI platforms, rather than legal counsel, to develop litigation strategies or prepare legal materials. These rulings highlight an emerging and unsettled legal landscape in which organizations cannot assume AI interactions will be treated as confidential or privileged.

Background

Heppner: Use of AI Creates Risk to Privilege

In United States v. Heppner (S.D.N.Y. February 17, 2026), Judge Jed S. Rakoff ruled that documents generated by a criminal defendant using Anthropic’s consumer AI tool, Claude, were not protected by privilege or the work-product doctrine. The defendant, a former CEO facing securities and wire fraud charges, had used the AI platform to organize information and develop defense strategies after receiving grand jury subpoenas. He created 31 AI-generated documents, which were later shared with counsel. When government agents executed a search warrant and seized these documents, the court ordered their disclosure, holding that the novelty of AI “does not mean its use is not subject to longstanding legal principles.”

The court noted that attorney-client privilege attaches to communications (1) between a client and their attorney, (2) that are intended to be, and in fact are, kept confidential, and (3) made for the purpose of obtaining or providing legal advice. The court found that the AI documents in question lacked at least two, if not all three, of these elements.

In considering the element of confidentiality, Judge Rakoff reasoned that the AI documents were not confidential, not only because Heppner communicated with a third-party AI platform, but also because of the platform’s written privacy policy. Notably, Anthropic’s policy explicitly stated that the platform collects and uses such data for its own purposes and reserves the right to disclose it to a host of “third parties,” including “governmental regulatory authorities.”

The Warner Decision: A Contrasting Approach

Around the same time as the Heppner ruling, a Michigan federal court reached a conclusion in potential tension with Heppner on whether AI-generated materials are protected from discovery. In Warner v. Gilbarco, Inc. (E.D. Mich. February 10, 2026), a civil employment dispute, the defendants sought production of all documents concerning the plaintiff’s use of third-party AI tools (including ChatGPT) in her lawsuit. The court denied this request, emphasizing that the plaintiff, who was self-represented, was entitled to protection under the work-product doctrine, rendering the materials non-discoverable.

The court in Warner reasoned that using AI tools to prepare legal materials is analogous to traditional work product–protected activities. Critically, the court rejected the argument that employing generative AI amounted to a waiver of work-product protection, stating that “ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background.” The court emphasized that waiver of work-product protection requires disclosure to an adversary or in a manner likely to reach an adversary’s hands, neither of which occurs when using an AI tool. As the court observed, “no cited case orders the production of what Defendants seek here: a litigant’s internal mental impressions reformatted through software.”

Notably, because the court was addressing the work-product doctrine (which would, in Canada, map in part onto litigation privilege) rather than attorney-client privilege, it can be argued that a different standard applies for assessing whether certain forms of disclosure eliminate the privilege. Nevertheless, the court’s characterization of generative AI as a tool and its conclusion that interaction with such tools does not destroy privilege stands in marked tension with the decision reached in Heppner.

Key Takeaways from Both Decisions

  • AI Tools Are Not Lawyers. Both courts agreed that communications with AI platforms cannot, on their own, establish attorney-client privilege. No lawyer-client relationship exists, and AI platforms usually expressly disclaim that they provide legal advice.
  • Consumer AI and Confidentiality: A Divided Landscape. Under Heppner, consumer AI platforms that reserve rights to collect user inputs, train models on submitted data, and disclose information to third parties destroy any reasonable expectation of confidentiality. Judge Rakoff likened sharing information with such a platform to discussing legal strategy in a public space. In contrast, the Warner court characterized generative AI platforms as tools rather than persons, holding that disclosure to an AI tool does not constitute a waiver of privilege.
  • Privilege Cannot Be Applied Retroactively. Sharing non-privileged, AI-generated documents with counsel after they are created does not transform them into privileged materials. Confidentiality must exist at the time of creation. This principle was applied in Heppner and remains a foundational rule.
  • Work-Product Protection: Counsel Involvement Matters. In Heppner, the absence of attorney involvement in the creation of AI-generated materials was fatal to the defendant’s work-product claim. The court noted that the outcome might have been different had counsel directed the defendant to use the AI tool. By contrast, in Warner, the court extended work-product protection to the plaintiff’s AI materials, emphasizing that such materials reflect the litigant’s mental impressions and litigation strategy, regardless of whether counsel was involved.
