Should I Input Privileged Advice Into a Public AI Tool and Can I Maintain Privilege When Doing So?
In today’s rapidly evolving technological landscape, the use of AI tools has become increasingly prevalent. However, when it comes to legal advice, a critical question arises: should one input privileged information into a public AI tool? The short answer is no: there is a significant risk that doing so will destroy the confidentiality on which privilege depends, and may also breach regulatory duties.
The Legal Position
For a communication to be protected by privilege, it must remain confidential. Information that has become “public property and public knowledge” cannot be classified as confidential. Whether a communication remains confidential therefore turns on how the parties handle it: who it is shared with, on what terms, and for what purpose.
Application to AI Systems
The implications of inputting privileged information into AI systems depend on the type of system being utilized, along with the terms of the agreement between the user and the AI provider:
- Public AI Systems: These are the free, consumer-facing versions of popular AI tools such as ChatGPT, which may use inputs to train their models. Users typically cannot negotiate the terms of service.
- Bespoke AI Systems: These are enterprise or custom deployments offering tailored data privacy and confidentiality protections, including contractual terms that prevent inputs from being used for training.
Even where a public AI system promises privacy, privacy is not the same thing as confidentiality. Courts have emphasised that the two should not be conflated: an expectation of privacy alone does not mean the information remains confidential in law.
Risks of Disclosure
Inputting privileged information into a public AI system increases the likelihood of losing its confidential nature. The October 2025 AI Guidance for the English judiciary warns that any information entered into a public AI chatbot should be considered published, resulting in a loss of confidentiality.
Although this guidance is not legally binding, it is reasonable to expect that judges will treat inputs into public AI systems as public knowledge. That said, case law suggests that public accessibility does not automatically equate to actual public access, so there may be limited circumstances in which information entered into such a system retains its confidential status.
Best Practices
Given the substantial risks associated with using public AI systems, the following best practices are recommended:
- Do not input confidential or privileged information into public AI systems. If there is a genuine need to do so, obtain explicit, informed client consent first; proceeding without it may breach legal and regulatory duties.
- Stay informed about regulatory developments and changes in case law regarding AI and legal practice.
- Establish and enforce internal policies regarding the use of AI systems within legal contexts.
In conclusion, the prudent course is to assume that any information entered into a public AI system loses its confidentiality, and with it any claim to privilege. Legal professionals should exercise caution and ensure that client information is handled only through systems with appropriate contractual and technical safeguards.