Court Rules AI Conversations Are Not Privileged: What United States v. Heppner Means for You

On February 13, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York issued an opinion addressing whether non-attorney communications with a generative AI platform are protected by the attorney-client privilege or work product doctrine. Judge Rakoff ruled that they are not.

Immediate Implications of the Ruling

This ruling has two immediate implications for anyone who uses AI tools such as ChatGPT, Claude, Gemini, or similar platforms in relation to legal or regulatory issues:

  1. No Attorney-Client Privilege: Judge Rakoff held that because AI tools are not licensed attorneys, communications with them are by definition not lawyer-client communications.
  2. Waiver of Existing Privilege: Even otherwise privileged communications lose their protected status once shared with public AI tools.

The Case: United States v. Heppner

In the case of United States v. Heppner, the defendant—a senior executive indicted for securities fraud—used Claude, Anthropic’s publicly available AI assistant, to analyze his legal situation and outline defense strategies on his own initiative, without direction from his attorneys. During a search of Heppner’s home, the FBI seized approximately 31 documents memorializing these AI conversations. Heppner moved to exclude the documents, arguing they were protected by attorney-client privilege and work product doctrine. Judge Rakoff rejected both arguments.

Key Holdings

No Attorney-Client Privilege

The court found that communications with an AI chatbot are not protected by the attorney-client privilege for several reasons:

  • AI is not an attorney: Claude cannot form an attorney-client relationship with a user.
  • No reasonable expectation of confidentiality: Anthropic’s privacy policy allows for the collection of user data, which could be disclosed to third parties, including governmental authorities.
  • Inputting privileged information waives the privilege: Feeding advice received from counsel into a public AI tool is akin to disclosure to a third party.
  • Privilege cannot be created retroactively: Non-privileged communications do not become privileged upon being shared with counsel.

No Work Product Protection

The court also rejected the defendant’s work product argument:

  • No counsel direction: Work product protection applies only to materials prepared under counsel’s direction. Heppner generated the AI documents independently.
  • AI is not an attorney: The AI documents did not reflect the strategy and mental impressions of counsel.
  • Affecting strategy is not the same as reflecting strategy: It is not enough that a document later influenced the defense; to qualify as work product, it must have reflected counsel’s strategy and mental impressions at the time it was created.

Key Takeaways

Based on this ruling, several key takeaways emerge:

  • No one should input work product or privileged information into public AI tools. Assume anything typed could be discovered and used against you.
  • Non-lawyers should avoid using even private AI tools for legal advice or analyses, as queries and responses are likely discoverable.
  • Understanding the difference between consumer and enterprise AI is crucial; enterprise deployments with negotiated confidentiality and data-retention terms may support a stronger expectation of confidentiality than public consumer tools.
  • Treat AI-generated analysis as discoverable; documents from AI conversations may be seized or subpoenaed.

Additional Considerations for Companies and In-House Counsel

Companies should implement or update AI usage policies to protect confidential information:

  • Protect internal investigations from potential privilege compromises.
  • Document directives from counsel during litigation preparation.
  • Advise senior leadership against using AI for legal analysis without protections.
  • Evaluate enterprise AI tools carefully, ensuring confidentiality agreements are in place.

The Bottom Line

The ruling in United States v. Heppner serves as a cautionary tale regarding the risks of using AI tools to analyze legal issues. Generative AI tools are powerful, but public versions are not confidential channels. Anyone involved in legal matters should treat these platforms with caution and take steps to protect privileged and confidential information.
