AI Governance: Practical AI Advice for In-House Counsel
At an annual seminar for in-house counsel, comprehensive guidance was shared on the strategic role of Artificial Intelligence (AI) in the contemporary business landscape, highlighting the key risks of AI implementation, the evolution of AI regulations, and a playbook for effective AI governance.
The Spectrum of AI Technologies
The event commenced with an outline of various forms of AI technologies. The most common form, automation, executes predefined, rule-based tasks aimed at enhancing efficiency. Examples include:
- Thermostats that activate heating at a certain temperature.
- Workflow approvals, data entry processes, and chatbots.
Legal risks associated with automation are considered relatively low. Generative AI, by contrast, creates new content based on patterns and structures learned from existing data and relies on human prompts to function effectively; it poses significantly greater risks, including potential IP infringement and factual inaccuracies.
Emerging Agentic AI
Agentic AI represents an advanced category capable of autonomously pursuing goals and executing tasks with minimal human intervention. Examples include:
- Self-driving cars
- Virtuoso QA, an autonomous quality assurance tool for software development.
This technology amplifies the legal risks associated with Generative AI and introduces new questions of agency and accountability.
The Evolving Regulatory Landscape
Staying abreast of the evolving regulatory landscape is critical. AI is increasingly integrated into various business sectors, transforming the role of legal departments from reactive gatekeepers to proactive strategic advisors. Legal teams must ensure that AI adoption is both strategic and defensible, which begins with understanding existing regulations.
AI regulations are dynamic, much like data privacy laws, and vary significantly across jurisdictions. The EU AI Act serves as a key framework, categorizing AI systems by risk level: Unacceptable, High, Limited, or Minimal. Obligations scale with the risk tier and affect businesses operating within the EU.
In contrast, the UK adopts a pro-innovation, context-based approach, leveraging existing regulators for oversight without creating new laws. Meanwhile, China’s regulations focus on algorithmic transparency and user consent, emphasizing state control.
In the US, regulations differ by state, with California and Colorado advancing their own privacy and automated decision-making laws. Federal agencies, such as the FTC and EEOC, provide guidance to address unfair practices and discrimination associated with AI.
Core Principles of AI Regulation
Several core principles are emerging within AI regulation:
- Transparency: The EU AI Act mandates labeling for deep fakes, and the FTC prohibits deceptive AI use in advertising.
- Fairness and Non-discrimination: The EU AI Act requires bias detection for high-risk systems, while in the US the EEOC enforces non-discrimination requirements where AI is used in employment decisions.
- Accountability: The EU mandates risk management systems for high-risk AI, while the US National Institute of Standards and Technology provides a voluntary AI Risk Management Framework.
- Human Oversight: Regulations require human intervention for high-risk systems in both the EU and China.
Legal Risks in AI Implementation
Organizations utilizing AI face several high-priority legal risks:
- Jurisdictional Compliance: The patchwork of regulations across the US complicates compliance efforts.
- Intellectual Property: AI-generated works may lack the human authorship required for copyright protection, and the use of existing works to train or prompt AI raises complex infringement questions.
- Data Privacy Risks: Using public-facing AI tools can expose sensitive information, risking data breaches.
- Algorithmic Biases: Past incidents, such as Amazon’s biased recruitment tool, highlight the risks of AI systems amplifying existing biases.
- Contractual Liabilities: Standard vendor agreements may not adequately address risks associated with AI-generated errors.
Legal professionals must understand the technology's risks to satisfy their duties of competence and confidentiality under the American Bar Association's Model Rules. Entering client information into AI tools could breach confidentiality and result in professional sanctions.
Generative AI, much like an eager-to-please child, seeks to provide an answer and will often fabricate information when it is uncertain. Legal professionals must therefore verify AI-generated information to uphold their professional responsibilities.