Revamping AI Governance for a Complex Future

AI Governance: The Need for a New Approach in Today’s Evolving Landscape

In recent years, artificial intelligence (AI) has shifted from a novel technology to a core component of business operations, bringing with it unprecedented risks that traditional governance frameworks cannot fully address. While boards have relied on well-established frameworks to manage data security, privacy, and compliance, these approaches fall short when it comes to AI’s unique and complex challenges.

This shortfall is driven by three main factors:

  • AI introduces novel risks.
  • New legal requirements for addressing these risks must be integrated into the tech stack.
  • Specialized skills, processes, and tools are essential for effective management.

1. AI: Not Just Software, But a New Frontier of Novel Risks

AI introduces specific challenges that legacy governance simply isn’t designed to address. Unlike traditional, rule-based software, AI systems learn, adapt, and make decisions based on data, which makes their behavior inherently less predictable.

As a system’s complexity grows – from traditional machine learning models like decision trees to intricate, multi-agent systems – these risks become harder to detect and address. AI systems may exhibit bias, lack transparency, or produce misinformation and unexpected outcomes – risks that traditional models of oversight don’t anticipate.

The AI Incident Database (AIID) tracks critical AI risks, including bias and discrimination across demographic factors, sector-specific failures (e.g., in healthcare and law enforcement), technical issues like generalization errors and misinformation generation, and operational risks. In August and September 2024 alone, the AIID added 46 new incidents. That steady pace underscores the need for an AI-specific governance framework that includes oversight, quality control, AI security and safety measures, and compliance updates tailored to the unique challenges AI presents.

2. Following in the Footsteps of Privacy and Security: The Need for Embedded Compliance

AI governance is following the path privacy and security once took. Both had to fight for recognition as critical, organization-wide concerns before ultimately proving their relevance and necessity; AI governance now faces the same struggle to be recognized as a company-wide risk area.

Privacy and security have also shown that simply having policies is not enough: legal requirements now demand that security and privacy measures be technically embedded into IT systems, products, and infrastructure from the outset – a proactive approach known as “shift left.” This practice makes these protections integral to the design and function of technology rather than retrofitted after development.

The same is true for AI. Risk management for AI is now mandated by a growing number of laws – such as the EU AI Act and U.S. state laws in Utah, Colorado, and California – and must be integrated directly into the technical architecture of AI systems.
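To illustrate what “technically embedded” can mean in practice, here is a minimal sketch of a shift-left gate in a deployment pipeline. Everything in it – the artifact names, directory layout, and the script itself – is a hypothetical assumption rather than a prescribed standard: the gate simply refuses to ship a model unless the required governance documents accompany it.

```python
"""Illustrative shift-left gate: block deployment of a model unless
required governance artifacts are present. All names are hypothetical."""
from pathlib import Path
import sys

# Governance artifacts this hypothetical pipeline requires for every model.
REQUIRED_ARTIFACTS = ["model_card.md", "risk_assessment.md", "data_sheet.json"]


def governance_gate(model_dir: Path) -> list[str]:
    """Return the names of missing artifacts; an empty list means the gate passes."""
    return [name for name in REQUIRED_ARTIFACTS if not (model_dir / name).is_file()]


if __name__ == "__main__":
    missing = governance_gate(Path("models/support-chatbot-v3"))
    if missing:
        print(f"Blocking deployment; missing governance artifacts: {missing}")
        sys.exit(1)  # fail the CI job so the release cannot proceed
    print("Governance gate passed.")
```

A check like this runs in the same pipeline that builds and ships the model, so compliance failures surface before release rather than in a post-hoc audit.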

For example, California’s AB 1008 extends existing privacy protections to generative AI systems, and AB 2013 mandates transparency about the data used to train AI models, pushing companies to build data governance practices directly into their technical stacks. Similarly, the risk assessments mandated by SB 896 signal that AI systems must be monitored and evaluated on an ongoing basis to mitigate threats ranging from infrastructural risks to potential large-scale failures.
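As a sketch of what embedded data governance could look like, the snippet below records training-data provenance in a structured data sheet that can be exported on demand. The schema, field names, and example values are illustrative assumptions, not requirements drawn from the statutes.

```python
"""Illustrative training-data provenance record: keep dataset metadata in
the stack so disclosure requests can be answered from structured data.
The schema is a hypothetical example."""
from dataclasses import dataclass, field, asdict
from datetime import date
import json


@dataclass
class DatasetRecord:
    """Provenance metadata for one training dataset (illustrative fields)."""
    name: str
    source: str                     # where the data was obtained
    license: str                    # usage terms, if any
    collected_from: date
    collected_to: date
    contains_personal_data: bool
    notes: str = ""


@dataclass
class ModelDataSheet:
    """Aggregates dataset records for a single trained model."""
    model_name: str
    datasets: list[DatasetRecord] = field(default_factory=list)

    def to_disclosure_json(self) -> str:
        """Serialize to JSON for publication or regulator requests."""
        return json.dumps(asdict(self), default=str, indent=2)


if __name__ == "__main__":
    sheet = ModelDataSheet(
        model_name="support-chatbot-v3",
        datasets=[
            DatasetRecord(
                name="support-tickets-2023",
                source="internal CRM export",
                license="internal use only",
                collected_from=date(2023, 1, 1),
                collected_to=date(2023, 12, 31),
                contains_personal_data=True,
                notes="PII redacted before training",
            )
        ],
    )
    print(sheet.to_disclosure_json())
```

Keeping this metadata alongside the model means a transparency obligation becomes a serialization call rather than a forensic reconstruction.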

Meeting these requirements demands a multidisciplinary approach. Legal professionals are essential to analyze applicable laws and determine compliance scope, while machine learning engineers, data scientists, and AI governance professionals translate those requirements into actionable technical and operational measures.

3. Moving Forward: Building Rigorous AI Governance

To address these new and complex risks, a fresh governance approach tailored specifically to AI is essential. It should include:

  • New Skills and Roles: Traditional governance teams may not have the specialized skills necessary to understand and manage AI systems. AI governance requires people with expertise in data science, machine learning, ethics, and regulatory compliance.
  • Processes for AI-Specific Risks: Unlike traditional software, AI models continuously evolve. Governance must therefore include processes for regular model reviews, audits, and performance evaluations (see the sketch after this list).
  • Advanced Tools and Technologies: Specialized governance tools – for example, model inventories, bias and performance monitoring, and audit-trail tooling – are necessary to handle the unique requirements of AI.
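As a concrete illustration of the second point, here is a minimal sketch of a recurring model review gate: it compares a model’s live performance against the baseline recorded at its last audit and escalates when drift exceeds a tolerance. The model name, metric, and threshold are hypothetical assumptions.

```python
"""Illustrative recurring model review gate: flag a model for human
review when live performance drifts past an agreed tolerance.
Names, metric, and threshold are hypothetical."""
from dataclasses import dataclass


@dataclass
class ReviewRecord:
    model_name: str
    baseline_accuracy: float  # accuracy signed off at the last audit
    tolerance: float = 0.05   # allowed absolute drop before escalation


def needs_review(record: ReviewRecord, live_accuracy: float) -> bool:
    """Return True if live performance has drifted past the tolerance."""
    return (record.baseline_accuracy - live_accuracy) > record.tolerance


if __name__ == "__main__":
    record = ReviewRecord("credit-scoring-v2", baseline_accuracy=0.91)
    live_accuracy = 0.84  # e.g., computed from last week's labeled outcomes
    if needs_review(record, live_accuracy):
        print(f"{record.model_name}: performance drift detected; "
              "escalating for governance review.")
```

Run on a schedule, a check like this turns “regular model reviews” from a calendar reminder into an enforced process.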

4. Conclusion: Adapting to New Realities in AI Governance

The rapid integration of AI into business operations has brought about risks that are unfamiliar to traditional governance structures. The unique risks posed by AI systems are not theoretical; they have significant real-world implications. Poorly governed AI systems can directly impact brand reputation, erode public trust, and result in costly legal repercussions.

Moving forward, companies must prioritize building governance structures that encompass the specialized skills, processes, and tools required to address the distinct and complex risks introduced by AI. Boards and executives who adopt this forward-looking approach to AI governance can position their organizations not only to avoid costly pitfalls but also to gain a strategic advantage in a rapidly evolving digital landscape.

Investing in AI governance is about more than compliance; it’s about ensuring that AI serves as a responsible and beneficial asset to the company and its stakeholders.
