Governing the Internal Use of Artificial Intelligence – Key Considerations
Many studies indicate that more than 70% of companies now use artificial intelligence (AI) internally. This use takes many forms, from developing agentic tools or customer-facing chatbots for specific workstreams to employees' more general-purpose use of widely available AI tools.
As companies increasingly incorporate AI into their daily operations, managing that use effectively becomes essential. This bulletin outlines the principal risks associated with internal AI use and suggests how to manage them.
The Threat to Established Legal Rights
The use of AI tools often requires transmitting information to a third party for processing. This can conflict with established best practices for keeping information confidential and with the legal requirements for maintaining valuable rights.
For instance, to obtain a patent, the invention must not have been publicly disclosed before a patent application is filed. Transmitting confidential information to an external AI tool could amount to such a disclosure and jeopardize patentability. Companies must therefore control the information shared with any external AI tool, including search assistants and AI notetaking tools.
Similarly, solicitor-client privilege can be waived if privileged communications are shared with third parties. Using AI notetaking tools in meetings could render those records discoverable in litigation, putting sensitive information at risk.
More generally, the loss of confidentiality itself poses a significant threat. Companies must actively monitor and restrict employees' use of AI tools to protect proprietary information and to comply with confidentiality agreements.
The Threat to Personal Information and Security Safeguards
Introducing untested technologies can create security flaws within an organization's information technology ecosystem. Without adequate safeguards, bad actors may exploit vulnerabilities in internet-facing systems.
Compliance with statutes governing the collection, use and disclosure of personal information, such as Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), is increasingly vital. As legislators continue to grapple with these issues, companies must monitor both current and anticipated legal requirements to avoid disruption.
The Need for Transparency and Human Oversight
As regulatory frameworks for AI evolve, transparency becomes paramount. For example, customers should be informed when they are interacting with an AI system rather than a human agent in customer service. Companies must also comply with marketing rules under the Competition Act and with consumer protection laws to avoid making misleading representations.
Reports of AI tools generating “hallucinated” outputs, plausible-sounding content with no basis in reality, highlight the need for human oversight. Companies should ensure that appropriate checks are in place to mitigate the legal risks arising from such inaccuracies.
The Need for Board-level Accountability and Governance
While AI offers significant opportunities, its associated risks require careful management. Companies should adopt an AI Governance Policy to oversee the technology’s evolving use.
Directors and officers must understand the technology behind the AI tools in use and ensure compliance with applicable regulations. They should also be familiar with contractual restrictions and limitations on the use of third-party data.
AI tools offer the potential for substantial efficiencies. As this bulletin emphasizes, realizing that potential requires recognizing and managing the risks associated with these rapidly evolving technologies.