AI Notetaking: Balancing Confidentiality and Ethics in Legal Practice

When AI Takes Notes: Protecting Privilege, Privacy, and Professional Obligations

Artificial intelligence (“AI”) tools have rapidly proliferated across the modern workplace, fundamentally changing how attorneys and other professionals conduct business and communicate with each other. AI-enhanced notetaking applications are now commonplace in corporate offices and law firms alike. While these technologies may offer efficiencies, many organizations have adopted them without fully comprehending the significant legal risks they present.

The Potential Erosion of Attorney-Client Privilege

For attorneys and their clients, perhaps the most pressing concern is the potential erosion of attorney-client privilege. When AI tools process, store, or transmit what would otherwise be privileged communications between attorneys and clients (especially when disclosed to third-party cloud providers), they may inadvertently waive the confidentiality protections that shield those communications from discovery. Independent of communications with counsel, client use of AI notetaking tools also creates vast amounts of potentially discoverable business records, exposing candid conversations that have not been historically recorded and increasing the complexity of discovery with the sheer volume of data.

Attorney use of AI notetaking applications also raises ethical considerations. Attorneys must ensure that the use of AI tools, including notetaking applications, does not conflict with core ethical obligations, including the duties of competence, independent judgment, loyalty, and confidentiality.

State Bar Opinion on Ethical Issues

On December 22, 2025, the New York City Bar Association released a formal opinion regarding a lawyer’s ethical obligations under the New York Rules of Professional Conduct when using AI notetaking applications. The opinion considers ethical issues fundamental to attorneys’ professional responsibilities regardless of jurisdiction.

The attorney’s duty of loyalty requires that lawyers not engage in conduct involving dishonesty, fraud, deceit, or misrepresentation. The New York City Bar Association has interpreted this duty to require that a lawyer obtain a client’s consent before using recording devices, even in a state that requires only one-party consent. Clients have an expectation of confidentiality when speaking with their attorneys, and they may choose their words more carefully when they know they are being recorded. When the lawyer does not disclose the recording, the client is deprived of that choice.

Attorneys must also adhere to their duty of confidentiality when using AI notetaking programs and retaining the information those programs produce. Attorneys must evaluate a tool’s privacy and security safeguards, including what data is stored, how long it is retained, what rights of deletion exist, and whether the tool uses client data to train AI models. Advising clients of the risks of loss of confidentiality and attorney-client privilege protection is also important to meeting these ethical obligations.

Recommendations for Ethical Practice

The New York City Bar Association emphasizes that the duty of competence prohibits attorneys from relying solely on AI work product: all transcripts and summaries must be reviewed for accuracy. As a general matter, attorneys must also understand the technical features, limitations, and security of the tools they and their clients use and be aware of the ethical issues that arise from such use.

Additionally, the opinion acknowledges the problems inherent in attorneys’ representations of clients who use their own AI tools. When clients independently use these programs, attorneys have little control over the recordings or summaries. To mitigate negative consequences, the opinion recommends that attorneys should address the possibility of such use at the outset of the relationship.

Legal Compliance and Risks

Attorneys must consider not only the ethical duties owed to clients but also compliance with statutes governing the use of these AI tools. Recording a meeting without first obtaining the consent of all participants could violate wiretap laws. Consent requirements vary, but some states require the consent of every participant before a meeting can be recorded.

Furthermore, some states have laws protecting biometric data, including “voiceprints” derived from the audio files these tools use to generate transcripts and summaries. AI tools alone cannot reliably determine the legality of recording in every jurisdiction, so attorneys must understand the applicable laws to ensure compliance.

Case Studies Highlighting Risks

In re Otter.AI Privacy Litigation showcases the confidentiality risks that arise when attorneys and clients alike use notetaker applications. In that case, pending in California federal court, the plaintiff alleges that the Otter.AI application records and transcribes conversations without obtaining proper consent from all participants.

The Otter.AI privacy policy shifts responsibility for obtaining consent to users and states that the company may share users’ personal information with third parties. Use of AI notetaking and transcription tools with similar policies is likely to nullify the protection of attorney-client privilege and create ethical issues for failing to maintain confidentiality.

United States v. Heppner underscored the concerns addressed in the New York City Bar Association opinion about clients who use AI for their legal matters without the direction or control of counsel. The court rejected the privilege and work product claims, raising significant implications for the use of AI platforms with similar disclosure policies.

Practical Considerations for Attorneys

Attorneys must approach AI notetaker applications with deliberate caution and comprehensive safeguards:

  • Obtain client consent to any attorney use of AI tools: Inform clients that AI notetaker or transcription tools may be used during the engagement.
  • Review all AI tool output: All notes and summaries should be reviewed by attorneys for accuracy.
  • Make clients aware of the risk of independent use of AI notetaking tools: Advise clients that independent use may result in records that are not protected by privilege.
  • Establish clear internal policies: Develop policies governing AI notetaker use, including verification procedures to ensure accuracy.
  • Cross-functional governance: A team of technology, compliance, and legal experts should review and approve AI tools to strengthen safeguards.
  • Prepare for discoverability: Advise clients on retention procedures concerning transcripts and summaries developed by AI notetaking tools.

In conclusion, as AI continues to evolve and become more embedded in daily operations, attorneys must remain vigilant in understanding how these tools function, where client data travels, and what safeguards exist to protect privileged and confidential information.
