How Activist Investors Could Turn AI Use into a Governance Test
Activist investors are poised to scrutinize how boards use AI with the same intensity they apply to ESG (Environmental, Social, and Governance) factors and cybersecurity. The focus is not merely on efficiency but on fiduciary effectiveness.
The Role of Transparency in AI Usage
Transparency demands around AI will highlight whether directors are leveraging available tools to fulfill their responsibilities. AI tools already support data collection, benchmarking, research, and scenario analysis; the central question will be whether directors are using them effectively in their decision-making.
Turning AI Neglect into Legal Consequences
The conversation could eventually extend into courtrooms. If, by 2027, a board has failed to adopt AI while missing performance targets and losing shareholder value, that neglect could be framed as a breach of fiduciary duty. The pivotal question will be whether directors met their obligations when they had access to tools that could have improved their decision-making.
AI Fluency: A Key Determinant of Governance
Central to this discussion is the notion of AI fluency. Boards will increasingly be evaluated based on their understanding of how AI impacts various aspects of the business, including supply chains, products, market opportunities, and overall strategy. To engage in these critical discussions, boards must possess a baseline level of AI knowledge.
Integrating AI Education into Board Governance
AI education should be treated with the same importance as other governance topics, such as insider trading and risk management. Boards must provide foundational education on AI to all members, although individual directors may choose to delve deeper into the subject.
The Risks of Unmonitored AI Use
Currently, many directors are using AI tools without formal guidelines, which creates governance and confidentiality risks. Research indicates that roughly 66% of directors use open-source AI tools to summarize and extract insights from board materials, while only about 20% have established AI usage policies. That gap exposes companies to risks around confidentiality and attorney-client privilege.
Concerns Around AI in Meeting Transcriptions
Using AI to record and transcribe board meetings introduces additional risks, especially around confidentiality and legal discovery. Critical questions include how long transcripts are stored, who can access them, and whether they could be used against the company in shareholder litigation.
Fiduciary Duty and AI Usage
A failure to adopt AI can indeed be viewed as a breach of fiduciary duty, though the issue is the quality of decisions rather than the speed of execution. Just as compensation committees are expected to use peer benchmarking data, boards must ensure they are leveraging AI tools to obtain reliable, data-driven insights.
Documenting AI Oversight
Boards should carefully document their AI oversight in board materials, reports, and minutes. AI should be woven into discussions of strategy, workforce, and risk management, and explicitly incorporated into ongoing board development initiatives.
The Importance of Human Oversight
Despite the rise of AI, human judgment remains crucial, particularly for high-stakes governance decisions. AI-generated content should be clearly labeled to maintain transparency around its use.
The Future of AI in Governance
Boards that transition from mere awareness to comprehensive literacy in AI will distinguish themselves. AI-literate boards will integrate AI considerations into strategy, talent management, and operational processes, with management reporting on progress at each quarterly meeting.
Shifting Roles in Governance
As AI becomes more embedded in governance, the role of the general counsel is evolving: it is becoming more strategic and more closely tied to enterprise risk management. The ethical dimensions and regulatory complexity surrounding AI will only accelerate this shift.
Cultural Barriers to AI Adoption
Cultural factors may pose significant challenges. Organizations that discourage experimentation tend to hinder AI adoption, while those that tolerate failure integrate AI technologies more readily.
As boards, investors, and regulators recalibrate their expectations, the pressing question may shift from whether directors can utilize AI to whether they can justify not using it.