Fragmented Futures: The Battle for AI Regulation

The Politics of Fragmentation and Capture in AI Regulation

The political economy of artificial intelligence (AI) regulation is increasingly shaped by the strategic behavior of governments, technology companies, and other influential actors. As AI systems, particularly those built on large language models (LLMs), grow more capable, international debate over how to regulate them has intensified.

Recent developments highlight the need to understand how national regulatory frameworks will interact. The European Union has passed the AI Act; the United States has produced a series of executive orders and state-level proposals; and China enforces stringent national data controls. This patchwork raises critical questions about the future of global AI governance in a fragmented regulatory environment.

The Local Game: Four Paths for Individual Jurisdictions

The regulation of AI can be understood through a local-level model that outlines the possible paths available to different jurisdictions:

1. No Local Regulation

Some jurisdictions may decline to regulate AI, whether because they judge the associated harms to be minimal, because regulators have been captured by industry, or because enforcement costs are prohibitive. This laissez-faire approach lets companies operate freely but can expose citizens to unregulated risks.

2. Compliance and Local Adaptation

Proactive jurisdictions may establish enforceable regulations that companies actually follow, the ideal scenario in which businesses adapt their operations to local legal frameworks. This path is most likely when the cost of evasion exceeds the cost of compliance (see the sketch after this list).

3. Partial Evasion and Regulatory Gaps

In many cases, some companies comply with regulations while others evade them. This disparity arises when governments lack the capacity or political will to enforce rules effectively, leading to uneven consumer protection and distorted market competition.

4. Market Withdrawal

When regulations become too burdensome relative to a market's value, companies may exit entirely, as when Google withdrew its search engine from mainland China rather than comply with censorship requirements. Such episodes highlight the potential downside of strict regulatory environments for market access.
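To make the firm's side of this model concrete, the cost comparison driving paths 2 through 4 can be written as a few lines of Python. This is a minimal, purely illustrative sketch: the Jurisdiction class and every cost figure in it are hypothetical assumptions, not data, and path 1 is omitted because not regulating is the government's choice rather than the firm's.

```python
from dataclasses import dataclass

@dataclass
class Jurisdiction:
    market_value: float      # revenue available to the firm in this market
    compliance_cost: float   # cost of adapting the product to local rules
    evasion_cost: float      # expected penalties, weighted by enforcement odds

def firm_strategy(j: Jurisdiction) -> str:
    """Return the payoff-maximizing response to one jurisdiction's rules."""
    payoffs = {
        "comply": j.market_value - j.compliance_cost,
        "evade": j.market_value - j.evasion_cost,
        "exit": 0.0,
    }
    return max(payoffs, key=payoffs.get)

# Strict rules but weak enforcement: evasion dominates (path 3).
print(firm_strategy(Jurisdiction(100, compliance_cost=40, evasion_cost=10)))  # evade
# Strict rules and strong enforcement: compliance wins (path 2).
print(firm_strategy(Jurisdiction(100, compliance_cost=40, evasion_cost=90)))  # comply
# Rules cost more than the market is worth: withdrawal (path 4).
print(firm_strategy(Jurisdiction(30, compliance_cost=50, evasion_cost=60)))   # exit
```

The point of the toy model is that enforcement capacity, not the strictness of the written rule, determines which path a firm takes: raising the expected penalty for evasion is what moves a market from path 3 to path 2.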

The Global Game: Four Futures for AI Governance

Expanding the analysis to the global level reveals the complexities of international regulatory interactions:

1. Multiple Local Regimes

In this scenario, many countries establish their own regulatory frameworks, permitting some arbitrage or evasion while preserving political autonomy. This “benign fragmentation” respects national sovereignty and allows diverse regulatory approaches, such as stricter consumer protection in the EU than in the U.S.

2. International Harmonization

As regulatory divergence becomes more pronounced, pressure for international harmonization may build. Governments may seek to bridge differences through treaties and coordinated rulemaking, reducing compliance burdens for companies operating across borders.

3. Unilateral Imposition (The “Brussels Effect”)

Occasionally, a powerful jurisdiction can set de facto global standards through strict early regulation. This phenomenon, known as the “Brussels Effect,” compels companies to adopt the highest standard worldwide, as seen with Apple’s shift to USB-C chargers following EU mandates.

4. Global Fragmentation (Splinternet of AI)

In a fractured scenario, countries enforce fully sovereign AI regimes, producing deep regulatory divergence. Companies may be forced to build separate products for different markets, which can stifle innovation and raise costs; the sketch below illustrates when this outcome prevails over scenario 3.
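From the firm's perspective, the tension between scenarios 3 and 4 reduces to a cost comparison: one product built to the strictest standard everywhere, versus a variant per market. The sketch below is purely illustrative; the function names and every figure in it are hypothetical assumptions, not data.

```python
def one_global_product(costs: dict[str, float]) -> float:
    """Build once to the strictest standard; that standard's burden then
    applies in every market served (the Brussels Effect)."""
    return max(costs.values()) * len(costs)

def per_market_products(costs: dict[str, float], variant_overhead: float) -> float:
    """Build a separate variant per market: each market bears only its own
    rule, but every extra variant duplicates engineering work (splinternet)."""
    return sum(costs.values()) + variant_overhead * (len(costs) - 1)

# Hypothetical per-market compliance costs, purely illustrative.
costs = {"EU": 40.0, "US": 15.0, "CN": 30.0}

print(one_global_product(costs))                          # 120.0
# Expensive duplication favors one product built to the strictest (EU) rule...
print(per_market_products(costs, variant_overhead=20.0))  # 125.0 (worse)
# ...while cheap per-market adaptation tips the firm toward fragmentation.
print(per_market_products(costs, variant_overhead=5.0))   # 95.0 (better)
```

The hinge is the duplication overhead: when per-market adaptation is cheap, fragmentation is bearable; when it is expensive, firms converge on the strictest rule and the Brussels Effect takes hold.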

Real-World Outlook on Current Events: Strategic Fragmentation

Recent events, particularly Executive Order 14179 issued by the Trump administration in January 2025, exemplify strategic fragmentation dynamics. This order rescinds previous AI safety measures and mandates the development of an “AI Action Plan,” signaling a shift towards prioritizing local industry interests.

This regulatory posture aligns with the interests of the U.S. AI industry and may invite companies to relocate operations to the U.S. in search of a more permissive environment. It also makes stricter regulatory approaches harder for other countries to sustain.

As jurisdictions assert their regulatory independence, the resulting fragmentation may undermine the economies of scale necessary for efficient AI development, compelling companies to produce jurisdiction-specific models. In the long term, this may lead to selective harmonization among allied countries that seek to balance sovereignty with economic efficiency.

Ultimately, the likelihood of a globally harmonized AI governance regime remains low, given its entanglement with geopolitical competition and economic sovereignty. Instead, a world characterized by strategic fragmentation is anticipated, where jurisdictions prioritize their regulatory independence while selectively cooperating in areas of mutual benefit.
