AI Governance: Strategies for Managing Risk in a Fragmented Regulatory Landscape

Global AI Regulations and Their Impact on Industry Leaders

Artificial intelligence (AI) faces significant regulatory uncertainty, driven largely by a fragmented legal landscape across countries. This fragmentation complicates the governance of transnational AI systems, hindering both innovation and compliance.

For instance, a 2024 Nature study highlights that the absence of harmonized international law complicates AI innovation, making it difficult for organizations to discern which standards apply in different jurisdictions. The lack of robust AI governance and risk management frameworks exposes organizations to operational, ethical, and financial risks. Compliance failures can be costly: fines under the EU AI Act reach up to €35 million or 7% of global annual turnover, whichever is higher, for the most severe violations.
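To give a rough sense of how that exposure scales with company size, the sketch below applies the "whichever is higher" rule to a few hypothetical revenue figures; the numbers are illustrative only, not a compliance calculation.

```python
# Illustrative only: maximum fine exposure for the most severe violations
# under the EU AI Act, i.e. the greater of a fixed cap and a share of
# global annual turnover. The revenue figures below are hypothetical.

FIXED_CAP_EUR = 35_000_000   # fixed ceiling for the most severe violations
REVENUE_SHARE = 0.07         # 7% of total worldwide annual turnover

def max_fine(global_revenue_eur: float) -> float:
    """Return the larger of the fixed cap and the revenue-based cap."""
    return max(FIXED_CAP_EUR, REVENUE_SHARE * global_revenue_eur)

for revenue in (100e6, 1e9, 10e9):  # hypothetical companies
    print(f"revenue €{revenue:,.0f} -> max exposure €{max_fine(revenue):,.0f}")
```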

Insights for Effective AI Governance

In a recent discussion, it was emphasized that organizations must actively manage growing AI risks by establishing governance frameworks, defining risk tolerance, and reducing aggregation risk through model diversification and task-specific fine-tuning. Two essential insights emerge for effective AI governance:

  • Building governance and accountability for AI risk: Clear risk ownership and governance frameworks are necessary to manage inevitable AI errors across jurisdictions.
  • Managing AI risk with governance and model strategy: Organizations must define risk tolerance, implement mitigation strategies beyond regulations, and diversify model architectures to reduce systematic bias and aggregation risk.

EU vs. US Approach to AI Regulation

The conversation opened by comparing how the EU and the US approach AI regulation:

  • The EU creates regulations upfront, establishing clear rules and requirements before issues arise.
  • The US typically evolves its regulatory framework through litigation, where court cases establish precedents and best practices over time.

This divergence means global companies must adapt their AI deployments to each jurisdiction’s requirements, increasing compliance burdens while also fostering clearer thinking about risks.

Accountability for AI Errors

One illustrative case involved a Canadian airline’s AI-powered chatbot providing inaccurate discount policy information, leading to a court ruling that the airline was liable for the misinformation, despite not developing the model. Such cases clarify the accountability landscape for AI outputs, guiding businesses on where to confidently adopt AI and where to exercise caution.

It is argued that responsibility for AI-related errors, particularly those stemming from generative AI, should rest primarily on AI adopters and potentially on developers, rather than on end users or on those affected by the AI’s decisions. Generative AI, while beneficial, operates on probabilities rather than certainties, making errors or hallucinations inevitable.

Active Management of AI Risks

The discussion also highlighted that the perception of AI has shifted from a distant possibility to a current operational reality. This shift necessitates a proactive approach to managing AI risks. Organizations need to:

  • Define risk tolerance levels.
  • Implement mitigation measures that extend beyond mere compliance.
  • Consider AI insurance as part of their risk management strategy.

As companies develop more AI use cases and deploy interactive AI models, the cumulative risk increases. This is particularly critical in sensitive applications where AI decisions can directly impact private consumers, raising concerns about AI-driven discrimination.
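Because each AI interaction carries some small probability of error, the chance of at least one harmful output grows quickly with volume. The sketch below illustrates this compounding effect; the per-interaction error rate and interaction counts are hypothetical.

```python
# Illustrative only: how a small per-interaction error rate compounds
# across a large number of AI interactions. All numbers are hypothetical.

def p_at_least_one_error(p_error: float, n_interactions: int) -> float:
    """Probability of at least one error across n independent interactions."""
    return 1.0 - (1.0 - p_error) ** n_interactions

p = 0.001  # hypothetical 0.1% error rate per interaction
for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} interactions -> P(>=1 error) = {p_at_least_one_error(p, n):.3f}")
```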

Systematic Discrimination Risks

It is important to note that while human decision-making can also involve discrimination, it is usually less systematic. In contrast, a biased AI model applies the same bias consistently at scale, potentially producing widespread and systematic discrimination that affects large groups of individuals.

The potential for discrimination extends beyond individual companies, especially when foundational models are involved. A flaw in one model can propagate through multiple organizations, creating an aggregation risk that can have far-reaching consequences.
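One common way to quantify whether a model applies bias consistently across groups is the disparate impact ratio, i.e. the ratio of favorable-outcome rates between groups. This metric is not prescribed by the article; the decisions and group labels below are hypothetical and shown only as a sketch.

```python
# Illustrative only: disparate impact ratio (ratio of favorable-outcome
# rates between groups). Decisions and group labels are hypothetical.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 55 + [("B", False)] * 45   # hypothetical outcomes
rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate impact ratio = {ratio:.2f}")  # below 0.8 is a common red flag
```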

Recommendations for Risk Mitigation

To mitigate these risks, the use of smaller, task-specific models is recommended. These models are easier to test and measure for errors, making them less susceptible to unpredictable performance shifts. In contrast, larger models may show inconsistencies across different use cases, as illustrated by a significant increase in error rates following a 2023 update to a popular generative model.
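The claim that task-specific models are easier to test can be made operational by re-running a fixed evaluation set on every model update and flagging error-rate regressions. The sketch below assumes a hypothetical `call_model` client, test cases, and tolerance; it is not a prescribed procedure from the article.

```python
# Illustrative only: re-run a fixed test set on each model version and
# flag when the error rate regresses beyond an agreed tolerance.
# `call_model` is a hypothetical placeholder for the organization's own client.

def call_model(version: str, prompt: str) -> str:
    raise NotImplementedError("replace with the actual model client")

def error_rate(version: str, test_cases: list[tuple[str, str]]) -> float:
    """Fraction of test prompts whose output does not match the expected answer."""
    errors = sum(call_model(version, prompt) != expected
                 for prompt, expected in test_cases)
    return errors / len(test_cases)

def regression_detected(baseline: float, candidate: float, tolerance: float = 0.02) -> bool:
    """True if the candidate version's error rate worsens by more than `tolerance`."""
    return candidate - baseline > tolerance
```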

Companies are encouraged to consider using different foundational models or opting for slightly less complex architectures to minimize aggregation risks while maintaining performance standards. Diversification is key to reducing potential liabilities while still harnessing the benefits that AI technologies offer.
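In practice, diversification can be as simple as routing different use cases to different providers so that a flaw in one foundational model does not propagate through every workflow. The sketch below is a minimal illustration; the provider names, routing table, and fallback choices are hypothetical.

```python
# Illustrative only: route use cases to different model providers so that a
# flaw in one foundational model does not affect every workflow at once.
# Provider names and the routing table are hypothetical.

ROUTING = {
    "customer_support": "provider_a/small-support-model",
    "contract_review":  "provider_b/legal-model",
    "internal_search":  "provider_c/general-model",
}
FALLBACKS = {
    "provider_a/small-support-model": "provider_c/general-model",
}

def select_model(use_case: str, unavailable: frozenset = frozenset()) -> str:
    """Pick the primary model for a use case, falling back if it is unavailable."""
    model = ROUTING[use_case]
    if model in unavailable:
        model = FALLBACKS.get(model, model)
    return model

print(select_model("customer_support"))
print(select_model("customer_support", unavailable=frozenset({"provider_a/small-support-model"})))
```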

More Insights

AI Regulations: Comparing the EU’s AI Act with Australia’s Approach

Global companies need to navigate the differing AI regulations in the European Union and Australia, with the EU's AI Act setting stringent requirements based on risk levels, while Australia adopts a...

Quebec’s New AI Guidelines for Higher Education

Quebec has released its AI policy for universities and Cégeps, outlining guidelines for the responsible use of generative AI in higher education. The policy aims to address ethical considerations and...

AI Literacy: The Compliance Imperative for Businesses

As AI adoption accelerates, regulatory expectations are rising, particularly with the EU's AI Act, which mandates that all staff must be AI literate. This article emphasizes the importance of...

Germany’s Approach to Implementing the AI Act

Germany is moving forward with the implementation of the EU AI Act, designating the Federal Network Agency (BNetzA) as the central authority for monitoring compliance and promoting innovation. The...

Global Call for AI Safety Standards by 2026

World leaders and AI pioneers are calling on the United Nations to implement binding global safeguards for artificial intelligence by 2026. This initiative aims to address the growing concerns...

Governance in the Era of AI and Zero Trust

In 2025, AI has transitioned from mere buzz to practical application across various industries, highlighting the urgent need for a robust governance framework aligned with the zero trust economy...

AI Governance Shift: From Regulation to Technical Secretariat

The upcoming governance framework on artificial intelligence in India may introduce a "technical secretariat" to coordinate AI policies across government departments, moving away from the previous...

AI Safety as a Catalyst for Innovation in Global Majority Nations

The commentary discusses the tension between regulating AI for safety and promoting innovation, emphasizing that investments in AI safety and security can foster sustainable development in Global...

ASEAN’s AI Governance: Charting a Distinct Path

ASEAN's approach to AI governance is characterized by a consensus-driven, voluntary, and principles-based framework that allows member states to navigate their unique challenges and capacities...