AI Governance: Strategies for Managing Risk in a Fragmented Regulatory Landscape

Global AI Regulations and Their Impact on Industry Leaders

Artificial intelligence (AI) faces significant regulatory uncertainty, driven largely by a fragmented legal landscape across countries. This fragmentation complicates the governance of transnational AI systems and hinders both innovation and compliance.

For instance, a 2024 Nature study highlights that the absence of harmonized international law complicates AI innovation, making it difficult for organizations to discern which standards apply in which jurisdiction. The lack of robust AI governance and risk management frameworks exposes organizations to operational, ethical, and financial risks. Compliance failures can be costly: fines under the EU AI Act reach up to €35 million or 7% of global annual turnover, whichever is higher, for the most severe violations.

Insights for Effective AI Governance

In a recent discussion, Michael Berger of Munich Re emphasized that organizations must actively manage growing AI risks by establishing governance frameworks, defining risk tolerance, and reducing aggregation risk through model diversification and task-specific fine-tuning. Two essential insights emerge for effective AI governance:

  • Building governance and accountability for AI risk: Clear risk ownership and governance frameworks are necessary to manage inevitable AI errors across jurisdictions.
  • Managing AI risk with governance and model strategy: Organizations must define risk tolerance, implement mitigation strategies beyond regulations, and diversify model architectures to reduce systematic bias and aggregation risk.

EU vs. US Approach to AI Regulation

The conversation opened by contrasting how the EU and the US approach AI regulation:

  • The EU creates regulations upfront, establishing clear rules and requirements before issues arise.
  • The US typically evolves its regulatory framework through litigation, where court cases establish precedents and best practices over time.

This divergence means global companies must adapt their AI deployments to each jurisdiction’s requirements, increasing compliance burdens while also fostering clearer thinking about risks.

Accountability for AI Errors

One illustrative case involved Air Canada's AI-powered chatbot, which gave a passenger inaccurate information about the airline's bereavement fare policy; a 2024 tribunal ruling held the airline liable for the misinformation even though it did not develop the underlying model. Such cases clarify the accountability landscape for AI outputs, guiding businesses on where to adopt AI confidently and where to exercise caution.

It is argued that responsibility for AI-related errors, particularly those produced by generative AI, should rest primarily with AI adopters, and potentially with developers, rather than with end users or those affected by the AI's decisions. Generative AI, for all its benefits, operates on probabilities rather than certainties, making occasional errors and hallucinations inevitable.
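A toy sketch makes this concrete: a generative model samples each output from a probability distribution, so even when the correct answer dominates, wrong answers retain nonzero probability. The vocabulary and logits below are invented for illustration and reflect no real model's API:

```python
import math
import random

def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Sample one token from the softmax distribution over logits."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    max_l = max(scaled.values())                      # subtract max for numerical stability
    weights = {tok: math.exp(l - max_l) for tok, l in scaled.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point edge case: fall back to the last token

# Toy example: the "right" answer dominates but is never certain.
logits = {"refundable": 4.0, "non-refundable": 2.0, "transferable": 0.5}
samples = [sample_token(logits) for _ in range(10_000)]
print(f"wrong answers: {1 - samples.count('refundable') / len(samples):.1%}")
# Roughly 14% of outputs are wrong even though the model "prefers" the right one.
```

Because the error rate is a property of the sampling mechanism itself, governance has to plan for errors rather than assume they can be engineered away.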

Active Management of AI Risks

The discussion also highlighted that the perception of AI has shifted from a distant possibility to a current operational reality. This shift necessitates a proactive approach to managing AI risks. Organizations need to:

  • Define risk tolerance levels (a concrete sketch follows this list).
  • Implement mitigation measures that extend beyond mere compliance.
  • Consider AI insurance as part of their risk management strategy.
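As an illustration of the first two points, the sketch below encodes risk tolerance as an explicit, per-use-case deployment gate. Every name, tier, and threshold here is an assumption for illustration, not a standard taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class Impact(Enum):
    LOW = 1       # e.g., internal drafting aids
    MEDIUM = 2    # e.g., customer-facing chatbots
    HIGH = 3      # e.g., credit, hiring, medical triage

@dataclass
class AIUseCase:
    name: str
    impact: Impact
    measured_error_rate: float   # from offline evaluation
    insured: bool = False        # AI insurance as a risk transfer

# Illustrative tolerance per impact tier, set by the governance board.
MAX_TOLERATED_ERROR = {Impact.LOW: 0.05, Impact.MEDIUM: 0.01, Impact.HIGH: 0.001}

def approve(use_case: AIUseCase) -> bool:
    """Approve deployment only if measured risk sits inside tolerance,
    or if the residual risk has been transferred (e.g., insured)."""
    tolerance = MAX_TOLERATED_ERROR[use_case.impact]
    return use_case.measured_error_rate <= tolerance or use_case.insured

chatbot = AIUseCase("discount-policy chatbot", Impact.MEDIUM, measured_error_rate=0.02)
print(approve(chatbot))  # False: exceeds tolerance and risk is not transferred
```

The point of such a gate is that "beyond mere compliance" becomes an auditable decision rather than a judgment call made separately for each deployment.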

As companies develop more AI use cases and deploy interactive AI models, the cumulative risk increases. This is particularly critical in sensitive applications where AI decisions can directly impact private consumers, raising concerns about AI-driven discrimination.
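The compounding effect is easy to quantify: if each use case carries even a small chance of a harmful error, the probability that at least one goes wrong grows quickly with portfolio size. The independence assumption below is a simplification, purely for illustration:

```python
# Probability that at least one use case produces a harmful error,
# assuming (for illustration) independent failures at rate p each.
def portfolio_risk(p: float, n_use_cases: int) -> float:
    return 1 - (1 - p) ** n_use_cases

for n in (1, 10, 50, 100):
    print(f"{n:>3} use cases at p=1%: {portfolio_risk(0.01, n):.1%}")
# 1 -> 1.0%, 10 -> 9.6%, 50 -> 39.5%, 100 -> 63.4%
```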

Systematic Discrimination Risks

It is important to note that while human decision-making can also be discriminatory, human bias tends to be less systematic. A biased AI model, in contrast, applies the same bias consistently and at scale, potentially producing widespread, systematic discrimination that affects large groups of people.

The potential for discrimination extends beyond individual companies, especially when foundational models are involved. A flaw in one model can propagate through multiple organizations, creating an aggregation risk that can have far-reaching consequences.
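A small Monte Carlo sketch makes the aggregation effect visible: when many organizations depend on one foundation model, a single flaw hits all of them at once, whereas spreading deployments across several independently built models caps the blast radius of any one flaw. The failure rate below is an assumed number:

```python
import random

P_MODEL_FLAW = 0.02   # assumed chance a foundation model carries a systematic flaw
N_TRIALS = 100_000

def severity_per_event(n_models: int) -> float:
    """Average fraction of organizations affected, given at least one
    flawed model, when orgs are spread evenly across n_models."""
    severities = []
    for _ in range(N_TRIALS):
        flawed = sum(random.random() < P_MODEL_FLAW for _ in range(n_models))
        if flawed:
            severities.append(flawed / n_models)
    return sum(severities) / len(severities)

print(f"1 shared model   : {severity_per_event(1):.0%} of orgs hit per flaw event")
print(f"10 diverse models: {severity_per_event(10):.0%} of orgs hit per flaw event")
# Expected losses are similar in both setups, but diversification turns a
# market-wide event into a contained one -- the tail risk that matters most.
```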

Recommendations for Risk Mitigation

To mitigate these risks, the use of smaller, task-specific models is recommended. These models are easier to test and to measure for errors, which makes unpredictable performance shifts easier to detect and contain. Larger models, in contrast, may show inconsistencies across different use cases, as illustrated by a significant increase in error rates following a 2023 update to a popular generative model.
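In practice, "easier to test" can mean pinning the model to a fixed, labelled evaluation set and gating every update on it. A minimal sketch, where the model interface, evaluation items, and thresholds are all assumptions for illustration:

```python
from typing import Callable

Model = Callable[[str], str]   # anything that maps an input to an output

EVAL_SET = [
    ("Is the bereavement discount applied retroactively?", "no"),
    ("What is the maximum refund window, in days?", "90"),
    # ...in practice, hundreds of curated, labelled cases
]

MAX_ERROR_RATE = 0.02   # absolute tolerance agreed with risk owners
MAX_REGRESSION = 0.005  # allowed worsening relative to the live version

def error_rate(model: Model) -> float:
    wrong = sum(model(q).strip().lower() != a for q, a in EVAL_SET)
    return wrong / len(EVAL_SET)

def release_gate(candidate: Model, live: Model) -> bool:
    """Block any update that breaches absolute tolerance or regresses
    materially against the currently deployed version."""
    cand, curr = error_rate(candidate), error_rate(live)
    return cand <= MAX_ERROR_RATE and (cand - curr) <= MAX_REGRESSION
```

A gate of this kind is what catches silent post-update degradation, like the 2023 incident described above, before it reaches customers.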

Companies are encouraged to consider using different foundational models or opting for slightly less complex architectures to minimize aggregation risks while maintaining performance standards. Diversification is key to reducing potential liabilities while still harnessing the benefits that AI technologies offer.
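One way to operationalize that diversification is a quorum: send the same request to models built on different foundations and act only when they agree. A minimal sketch, with hypothetical model callables:

```python
from collections import Counter
from typing import Callable, Optional

Model = Callable[[str], str]

def quorum_answer(prompt: str, models: list[Model], min_votes: int = 2) -> Optional[str]:
    """Return the majority answer across independently built models,
    or None (escalate to a human) when no quorum is reached."""
    votes = Counter(model(prompt) for model in models)
    answer, count = votes.most_common(1)[0]
    return answer if count >= min_votes else None

# Hypothetical wiring: each callable wraps a different provider or
# architecture, so a flaw in one model family cannot sink all three.
# answer = quorum_answer(question, [vendor_a_model, vendor_b_model, small_in_house_model])
```

The design trade-off is cost and latency against correlated failure: a quorum of heterogeneous models is more expensive per request, but a systematic flaw in any single model family can no longer propagate unchecked.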
