Protecting Privilege: Risks of Sharing Confidential Advice with Public AI Tools

Should I Input Privileged Advice Into a Public AI Tool, and Can I Maintain Privilege When Doing So?

AI tools are now a routine part of working life, which raises a critical question for legal practice: should privileged information ever be entered into a public AI tool? The short answer is no. Doing so carries a significant risk that the information will lose its confidentiality, and with it privilege, and may also put the user in breach of regulatory duties.

The Legal Position

For a communication to be protected by privilege, it must remain confidential. Information that has become “public property and public knowledge” cannot be confidential. Whether a communication stays confidential therefore depends on how the parties who share it treat it and what use is made of the information once shared.

Application to AI Systems

The consequences of inputting privileged information into an AI system depend on the type of system used and on the terms agreed between the user and the AI provider:

  • Public AI Systems: free, publicly available versions of popular tools such as ChatGPT, which may use user inputs to train their models. Users typically cannot negotiate the terms of service.
  • Bespoke AI Systems: systems procured with tailored data privacy and confidentiality protections, where the agreement can restrict the use of inputs for training purposes.

Even where a public AI system promises privacy, that is not the same as confidentiality. Courts have stressed that the two should not be confused: an expectation of privacy alone does not mean that information remains confidential in the legal sense.

Risks of Disclosure

Inputting privileged information into a public AI system creates a real risk that it will lose its confidential character. The October 2025 AI Guidance for the English judiciary warns that any information entered into a public AI chatbot should be regarded as published, with the corresponding loss of confidentiality.

Although this guidance is not legally binding, it is reasonable to assume that judges will treat inputs into public AI systems as public knowledge. That said, case law suggests that information being publicly accessible does not automatically mean it has actually been accessed, so there may be circumstances in which information entered into such a system nonetheless retains its protected status.

Best Practices

Given the substantial risks associated with using public AI systems, the following best practices are recommended:

  • Do not input confidential or privileged information into public AI systems without explicit client consent, as this may breach legal duties.
  • Stay informed about regulatory developments and changes in case law regarding AI and legal practice.
  • Establish and enforce internal policies governing the use of AI systems in legal work, for example by backing the policy with an automated pre-submission check (see the sketch below).
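
To make the last point concrete, a firm's internal policy can be supported by a simple technical control that screens text before it reaches a public AI tool. The following is a minimal, hypothetical Python sketch: the marker phrases and the screen_prompt function are assumptions made for illustration, not features of any real product, and a crude keyword gate like this would only ever supplement human review and explicit client consent.

    import re

    # Hypothetical marker phrases suggesting a document is privileged. A real
    # deployment would rely on firm-specific classifiers or data loss
    # prevention tooling; this keyword list is purely illustrative.
    PRIVILEGE_MARKERS = [
        r"legally privileged",
        r"privileged\s+(?:and|&)\s+confidential",
        r"attorney[- ]client",
        r"without prejudice",
    ]

    def screen_prompt(prompt: str) -> bool:
        """Return True if the text looks safe to send to an external AI tool.

        Blocks any text containing a known privilege marker. This gate
        supplements, but never replaces, human review.
        """
        lowered = prompt.lower()
        return not any(re.search(pattern, lowered) for pattern in PRIVILEGE_MARKERS)

    if __name__ == "__main__":
        sample = "PRIVILEGED AND CONFIDENTIAL: draft advice on the dispute"
        if screen_prompt(sample):
            print("Cleared for external use.")
        else:
            print("Blocked: possible privileged content; route for human review.")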

In conclusion, the prudent course of action is to assume that any information entered into a public AI system will lose its confidentiality. Legal professionals must exercise caution and ensure that client information remains secure.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...