Implications of AI Use on Legal Privilege in Court Rulings

When AI Isn’t Privileged: SDNY Rules Generative AI Documents Not Protected

Executive Summary

  • Independent AI use may not be privileged: The independent, unsupervised use of generative AI to analyze legal exposure may not be privileged. A federal court recently ruled that a defendant’s AI prompts and outputs relating to a criminal investigation were not protected after being seized pursuant to a search warrant.
  • Platform terms matter: If an AI provider reserves rights to retain, train on, or disclose user inputs, courts may find confidentiality—and therefore privilege—compromised.
  • Structure AI use under counsel’s direction: The ruling leaves open whether counsel-directed enterprise AI use on a secure platform with strong confidentiality terms may be treated differently. Governance and process may be outcome-determinative.

On February 10, 2026, U.S. District Judge Jed Rakoff of the Southern District of New York issued a ruling stating that a defendant’s use of generative AI to analyze legal exposure is not protected under attorney-client privilege or the work product doctrine. This decision carries significant implications as clients and non-lawyers increasingly use generative AI tools to assess legal risk, despite disclaimers from AI companies that their tools do not provide legal advice. The use of public AI tools creates substantial privilege risks because they often lack confidentiality protections and are generally not used under counsel’s direction.

Judge Rakoff’s Ruling

In United States v. Heppner, a federal securities fraud case, defendant Bradley Heppner utilized a third-party generative AI tool, Anthropic’s Claude, to input prompts regarding the government’s investigation and his potential legal exposure. These prompts included facts he learned from his counsel, and the platform generated written responses.

During a search of Heppner’s residence, agents seized numerous electronic devices containing approximately thirty-one AI-generated documents. The defense asserted privilege over these materials, claiming they were created for discussions with counsel and were later shared with them. However, the defense conceded that the materials were prepared independently and not at counsel’s direction.

The government sought a ruling that these materials were neither privileged nor protected work product. The court granted that request.

The Court’s Core Conclusions

Judge Rakoff concluded that the AI documents were not protected by attorney-client privilege or the work product doctrine for several reasons:

  • AI Platforms Are Not Attorneys: Attorney-client privilege protects confidential communications between a client and counsel for obtaining legal advice. The AI documents were not communications with an attorney and were not created for that purpose. When queried about legal matters, Anthropic’s Claude warns users to consult a “qualified attorney.” Thus, independent usage of an AI tool was treated as research activity, not privileged communication.
  • Confidentiality Was Not Preserved: Sharing inputs and outputs with a consumer AI platform that retains rights to user data means those communications are not confidential. Claude is publicly accessible and collects data from user prompts, undermining any claim to confidentiality. The court did not address scenarios in which the AI tool runs in a closed enterprise environment designed to protect confidentiality.
  • Work Product Requires Attorney Direction: The work product doctrine protects materials prepared by or at the direction of counsel in anticipation of litigation. Since the defendant acted independently, the AI materials did not qualify as work product. The court noted that sharing AI output with counsel does not retroactively confer privilege.

Implications for Companies and Executives

As executives and compliance leaders increasingly utilize generative AI tools to analyze legal and regulatory exposure, this ruling suggests that, without careful structuring, those interactions may not be privileged and could be discoverable in future proceedings.

Three practical points emerge:

  • Independent AI use can create discoverable material: Using AI to evaluate legal exposure or regulatory issues may generate non-privileged documents, even if the work is preparatory to discussions with counsel.
  • Enterprise governance matters: If platform terms allow for data retention or disclosure to regulators, privilege claims may fail. Governance should weigh litigation risk alongside cybersecurity and privacy.
  • Structure and process may be outcome-determinative: While this decision did not address counsel-directed use on a secure enterprise platform, that distinction could be crucial, especially if counsel supervises prompts as part of litigation preparation.

Practical Guidance

Treat AI as a sophisticated but potentially disclosure-prone tool, not a trusted legal advisor.

Before using AI tools, consider confidentiality. Ensure that the AI tool is a closed enterprise program designed to protect client data. Understand how the provider trains its models and whether training draws on a closed set of documents from a single client or on data collected from all users. Be cautious with publicly available AI programs, as they may not offer the same confidentiality protections as internal tools.

Companies should involve counsel before using AI tools for legal analysis, establish formal protocols for AI use in investigations, review AI platform terms for confidentiality, and avoid uploading privileged communications without oversight.

Looking Ahead

Courts are unlikely to expand privilege doctrines simply due to the sophistication or widespread use of AI tools. Traditional requirements—confidentiality, attorney involvement, and preparation at counsel’s direction—remain essential.

As AI becomes integrated into corporate governance and compliance, preserving privilege will depend more on how the tools are used than on the technology itself. For boards, executives, and compliance leaders, this ruling serves as a reminder to structure AI use with the same caution applied to any sensitive legal communication.
