AI Governance: Strategies for Managing Risk in a Fragmented Regulatory Landscape

Global AI Regulations and Their Impact on Industry Leaders

Artificial intelligence (AI) faces significant regulatory uncertainty, driven largely by a fragmented legal landscape across countries. This fragmentation complicates the governance of transnational AI systems, hindering both innovation and compliance.

A 2024 Nature study, for instance, highlights that the absence of harmonized international law complicates AI innovation, making it difficult for organizations to discern which standards apply in which jurisdiction. Without robust AI governance and risk management frameworks, organizations are exposed to operational, ethical, and financial risks. Compliance failures can be costly: fines under the EU AI Act reach up to €35 million or 7% of global annual turnover for the most severe violations.

Insights for Effective AI Governance

A recent discussion emphasized that organizations must actively manage growing AI risks by establishing governance frameworks, defining risk tolerance, and reducing aggregation risk through model diversification and task-specific fine-tuning. Two essential insights emerge for effective AI governance:

  • Building governance and accountability for AI risk: Clear risk ownership and governance frameworks are necessary to manage inevitable AI errors across jurisdictions.
  • Managing AI risk with governance and model strategy: Organizations must define their risk tolerance, implement mitigation strategies that go beyond regulatory minimums, and diversify model architectures to reduce systematic bias and aggregation risk (a minimal sketch follows this list).
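
To make these points concrete, the sketch below models a minimal AI risk register in Python. The tiers, field names, and tolerance rule are illustrative assumptions, not a prescribed standard; production frameworks such as the NIST AI RMF or ISO/IEC 42001 are considerably richer.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3        # e.g., hiring, credit, or medical use cases
    PROHIBITED = 4

@dataclass
class AIUseCase:
    name: str
    owner: str                  # named individual accountable for errors
    tier: RiskTier
    jurisdictions: list[str]    # where the system is deployed
    mitigations: list[str] = field(default_factory=list)

@dataclass
class RiskRegister:
    # Illustrative tolerance: the highest tier deployable without mitigations.
    tolerance: RiskTier = RiskTier.LIMITED
    entries: list[AIUseCase] = field(default_factory=list)

    def register(self, use_case: AIUseCase) -> None:
        if use_case.tier is RiskTier.PROHIBITED:
            raise ValueError(f"{use_case.name}: prohibited practice")
        if use_case.tier.value > self.tolerance.value and not use_case.mitigations:
            raise ValueError(f"{use_case.name}: exceeds tolerance with no mitigations")
        self.entries.append(use_case)

# Usage: a high-risk use case passes only because mitigations are attached.
register = RiskRegister()
register.register(AIUseCase(
    name="support-chatbot",
    owner="jane.doe",           # clear, named risk ownership
    tier=RiskTier.HIGH,
    jurisdictions=["EU", "US"],
    mitigations=["human review of policy answers", "output logging"],
))
```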

EU vs. US Approach to AI Regulation

The conversation opened by contrasting how the EU and the US approach AI regulation:

  • The EU creates regulations upfront, establishing clear rules and requirements before issues arise.
  • The US typically evolves its regulatory framework through litigation, where court cases establish precedents and best practices over time.

This divergence means global companies must adapt their AI deployments to each jurisdiction’s requirements, increasing compliance burdens while also fostering clearer thinking about risks.

Accountability for AI Errors

One illustrative case involved a Canadian airline whose AI-powered chatbot gave a customer inaccurate information about a discount policy. A tribunal ruled that the airline was liable for the misinformation even though it had not developed the underlying model. Such cases clarify the accountability landscape for AI outputs, guiding businesses on where they can confidently adopt AI and where they should exercise caution.

The argument made was that responsibility for AI-related errors, particularly those stemming from generative AI, should rest primarily with AI adopters and potentially the developers, rather than with end users or those affected by the AI's decisions. Generative AI, for all its benefits, operates on probabilities rather than certainties, making errors and hallucinations inevitable: even a system that answers correctly 99% of the time will produce roughly 10,000 wrong answers per million interactions.

Active Management of AI Risks

The discussion also highlighted that the perception of AI has shifted from a distant possibility to a current operational reality. This shift necessitates a proactive approach to managing AI risks. Organizations need to:

  • Define risk tolerance levels.
  • Implement mitigation measures that extend beyond mere compliance.
  • Consider AI insurance as part of their risk management strategy.

As companies develop more AI use cases and deploy interactive AI models, the cumulative risk increases. This is particularly critical in sensitive applications where AI decisions can directly impact private consumers, raising concerns about AI-driven discrimination.
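
A back-of-the-envelope calculation shows why this cumulative risk grows so quickly. The per-use-case failure probability below is an illustrative assumption, not a measured figure, and the model assumes failures are independent:

```python
# Probability that at least one of n AI use cases causes a serious
# incident in a given period, assuming independent failures.
def cumulative_risk(per_use_case_risk: float, num_use_cases: int) -> float:
    return 1 - (1 - per_use_case_risk) ** num_use_cases

# Illustrative: each use case has a 2% annual chance of a serious incident.
for n in (1, 5, 10, 25, 50):
    print(f"{n:>3} use cases -> {cumulative_risk(0.02, n):.0%} annual incident risk")
# Output: 2%, 10%, 18%, 40%, 64% -- risk compounds rapidly with scale.
```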

Systematic Discrimination Risks

Human decision-making can also be discriminatory, but it is typically inconsistent and therefore less systematic. A biased AI model, by contrast, applies the same bias consistently and at scale, potentially producing widespread, systematic discrimination against large groups of individuals.

The potential for discrimination extends beyond individual companies, especially when foundational models are involved. A flaw in one model can propagate through multiple organizations, creating an aggregation risk that can have far-reaching consequences.
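
This aggregation effect can be illustrated with a small Monte Carlo sketch. All numbers here are invented for illustration: 100 organizations either all depend on one foundation model or are spread evenly across four independent ones.

```python
import random

random.seed(0)
TRIALS = 100_000
ORGS = 100
FLAW_PROB = 0.05   # illustrative chance that a given model ships a serious flaw

def mass_failure_rate(num_models: int, threshold: int = ORGS // 2) -> float:
    """Fraction of trials in which flaws hit more than `threshold` organizations
    at once, with organizations split evenly across the available models."""
    per_model = ORGS // num_models
    hits = 0
    for _ in range(TRIALS):
        affected = sum(per_model for _ in range(num_models)
                       if random.random() < FLAW_PROB)
        hits += affected > threshold
    return hits / TRIALS

print(f"one shared model: {mass_failure_rate(1):.2%}")  # ~5%: one flaw hits everyone
print(f"four models:      {mass_failure_rate(4):.3%}")  # ~0.05%: needs 3+ simultaneous flaws
```

The expected number of affected organizations is the same in both cases; what diversification changes is the tail risk of a single flaw taking out most of the market at once.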

Recommendations for Risk Mitigation

To mitigate these risks, the discussion recommended smaller, task-specific models. These are easier to test and to measure for errors, and they are less susceptible to unpredictable performance shifts. Larger general-purpose models, by contrast, can behave inconsistently across use cases, as illustrated by a significant jump in error rates on some tasks after a 2023 update to a popular generative model.
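
A practical consequence is to pin an evaluation set and re-run it on every model update, so performance shifts surface before users see them. In the sketch below, call_model is a hypothetical stand-in for the organization's inference API, and the test set and 2% tolerance are illustrative assumptions:

```python
# Minimal regression check for model updates.
TEST_SET = [
    ("Is the refund window 30 days?", "yes"),
    ("Do bereavement fares apply retroactively?", "no"),
    # ...in practice, a few hundred labeled, task-specific examples
]
MAX_ERROR_RATE_INCREASE = 0.02  # illustrative tolerance for regressions

def call_model(model_version: str, prompt: str) -> str:
    """Hypothetical placeholder; wire this to the actual model endpoint."""
    raise NotImplementedError

def error_rate(model_version: str) -> float:
    errors = sum(
        1 for prompt, expected in TEST_SET
        if call_model(model_version, prompt).strip().lower() != expected
    )
    return errors / len(TEST_SET)

def safe_to_upgrade(current: str, candidate: str) -> bool:
    """Block a rollout that regresses beyond the agreed tolerance."""
    return error_rate(candidate) <= error_rate(current) + MAX_ERROR_RATE_INCREASE
```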

Companies are encouraged to consider using different foundational models or opting for slightly less complex architectures to minimize aggregation risks while maintaining performance standards. Diversification is key to reducing potential liabilities while still harnessing the benefits that AI technologies offer.
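
For high-stakes decisions, one lightweight way to apply this diversification is to query several independently developed models and act only on agreement. The model callables below are hypothetical stand-ins for different vendors or architectures; the point is that they should not share training lineage:

```python
from collections import Counter
from typing import Callable, Optional

def diversified_decision(
    prompt: str,
    models: list[Callable[[str], str]],
    min_agreement: int,
) -> Optional[str]:
    """Return the majority answer, or None to escalate to a human reviewer."""
    votes = Counter(model(prompt) for model in models)
    answer, count = votes.most_common(1)[0]
    return answer if count >= min_agreement else None

# Usage: three models from distinct vendors, requiring 2-of-3 agreement;
# disagreement escalates the case rather than auto-deciding it.
# decision = diversified_decision(claim_text, [model_a, model_b, model_c], 2)
```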

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...