AI Governance: Emerging Directors and Officers Liability Risks

AI Governance Failures as a Directors and Officers Liability Risk

AI governance is evolving from a theoretical concept discussed in boardrooms to a pressing issue that poses significant liability risks for directors and officers (D&O). As regulatory bodies transition from guidance to enforcement, inadequate oversight of artificial intelligence (AI) is increasingly recognized as a live risk that can impact company valuation, disclosure, and investor trust.

Recent Regulatory Actions

The urgency of this governance challenge is underscored by recent regulatory actions against companies such as X, related to its Grok AI chatbot. French authorities have conducted raids on offices connected to the platform, while UK regulators have initiated inquiries into data usage and content controls. While these investigations are primarily focused on technology and compliance, the ramifications are now reaching the board level.

Directors and Officers Risks

Specialists such as the head of international D&O and financial institutions at Rokstone Underwriting argue that AI governance must be viewed through the lens of D&O risk. “AI governance is always a directors and officers risk hazard,” they note, emphasizing that this applies to governance across all sectors. D&O policies cover governance risks by their very nature, absent specific exclusions.

Moreover, exposure can stem from both action and inaction. “You’re always exposed to it,” they warn, whether directors believe they are effectively managing AI or are neglecting it altogether. The potential loss of competitive advantage and subsequent devaluation of the business are real hazards linked to AI governance failures.

Intangible Assets and Valuation Risks

The governance challenge is compounded by how heavily modern company valuations rest on intangible assets such as data, intellectual property, and reputation. Recent analyses indicate that roughly 90% of asset value in the S&P 500 was intangible as of 2020.

“If that’s reputational or R&D-driven, and there’s an issue with the reputation or a flaw in the research, that value can be erased overnight,” experts caution. Twitter during Elon Musk’s acquisition is a telling example: questions about the platform’s user base and the prevalence of non-human or spam accounts significantly affected its valuation.

Investment Pressure and Governance Challenges

The scale of investment flowing into AI technologies is creating additional D&O exposure. Major players in the S&P 500, including Apple, Microsoft, Amazon, Alphabet, and Meta, are investing heavily in AI initiatives. This influx of capital raises questions not only about operational governance but also about how companies allocate resources and justify those investments to stakeholders.

The Risks of AI-Washing

The D&O risks become more pronounced when companies exaggerate their AI capabilities to inflate valuations, a practice known as AI-washing. The downfall of Builder.ai, which falsely marketed itself as a fully AI-driven platform, highlights this risk. After raising approximately $445 million and reaching a valuation of $1.5 billion, the company collapsed in 2025 when it emerged that its AI capabilities were non-existent and that customer requests were being handled by human developers.

Regulatory Fragmentation

The complexity of navigating fragmented global regulation further compounds these risks. The U.S., EU, and UK are taking different approaches to AI regulation, a challenge for companies operating internationally. How those companies manage compliance across jurisdictions will be critical to limiting D&O claims.

Underwriting and Market Implications

Despite the growing discourse on AI risks, the underwriting sector has yet to fully absorb these concerns, though caution is increasing, particularly around startups and investment funds focused on AI. As the market debates whether AI valuations have reached bubble territory, the signs of caution among D&O insurers are becoming harder to ignore.

In conclusion, AI governance is not just a future concern—it is an immediate risk for boards of directors, with real implications for financial stability and legal liability.
