Bridging the Gaps in AI Governance

The Need for and Pathways to AI Regulatory and Technical Interoperability

As we stand at a critical juncture in AI’s development, a significant governance challenge is emerging that could stifle innovation and create global digital divides. The current AI governance landscape is a patchwork of fragmented regulations, technical and non-technical standards, and frameworks that make deploying AI systems globally increasingly difficult and costly.

The Fragmented AI Governance Landscape

Today’s global AI governance environment is characterized by diverging regulatory approaches across major economies. The EU has positioned itself as a first mover with its AI Act, a binding, risk-based classification regime that bans certain AI applications outright and imposes stringent obligations on high-risk systems, such as those used for biometric identification and in critical infrastructure. The AI Act stands in stark contrast to the UK’s sector-specific approach, which avoids new legislation in favor of empowering existing regulators to apply five cross-cutting principles tailored to industries like healthcare and finance. Meanwhile, the US lacks comprehensive federal AI legislation, resulting in a chaotic mix of state-level laws and non-binding federal guidelines.

States like Colorado have enacted laws with “duty of care” standards to prevent algorithmic discrimination, while others have passed various sector-specific regulations. The recent shift in US federal leadership has further complicated matters, with the Trump administration’s 2025 Executive Order focusing on “sustaining and enhancing US AI dominance.” In contrast, China combines state-driven ethical guidelines with hard laws targeting specific technologies like generative AI, emphasizing alignment with national security and government values.

Aside from hard laws, soft law initiatives add another layer of complexity to the fragmented AI governance landscape. Recent datasets capture more than 600 AI soft law programs and over 1,400 AI-related standards across organizations like IEEE, ISO, ETSI, and ITU. While some efforts, like ISO/IEC 42001 and the OECD AI Principles, have gained considerable traction, the sheer number of competing instruments creates a significant compliance burden for organizations aiming to develop or deploy AI systems globally and responsibly.

Why AI Regulatory and Technical Interoperability Matters

This fragmentation creates serious problems for innovation, safety, and equitable access to AI technologies. When a healthcare algorithm built to satisfy the EU’s strict data governance rules must also operate under US state laws that permit far broader biometric data collection, deploying that beneficial system globally becomes markedly harder. The economic costs are substantial: according to APEC’s 2023 findings, interoperable frameworks could boost cross-border AI services by 11-44% annually.

Complex and incoherent AI rules disproportionately impact startups and small and medium-sized enterprises that lack the resources to navigate fragmented compliance regimes, essentially giving large enterprises an unfair advantage. Furthermore, technical fragmentation perpetuates closed ecosystems. Without standardized interfaces for AI-to-AI communication, most systems remain siloed within corporate boundaries, stifling competition, user choice, edge-based innovation, and trust in AI systems.

When safety, fairness, and privacy rules vary dramatically between jurisdictions, users have no assurance that an AI application developed elsewhere meets the protections they expect at home. Establishing shared regulatory and technical principles would help ensure that users in different markets can trust AI applications across borders.

Pathways to AI Interoperability

Fortunately, there are four promising pathways to advance both regulatory and technical interoperability. These pathways do not require completely uniform global regulations but rather focus on creating coherence that enables cross-border AI interactions while respecting national priorities:

  1. Incorporation of Global Standards: Governments should incorporate global standards and frameworks into domestic regulation. Rather than developing rules from scratch, policymakers can reference established international standards like ISO/IEC 42001 in domestic law. This incorporation-by-reference approach creates natural alignment in compliance mechanisms while still allowing national customization.
  2. Open Technical Standards: We need open technical standards for AI-to-AI communication. While proprietary corporate APIs may offer short-term fixes, true open standards developed through multistakeholder bodies like IEEE, W3C, or ISO/IEC would create a level playing field; a purely illustrative sketch of what such a shared message format might look like follows this list. Governments can incentivize adoption through procurement policies or tax benefits.
  3. Piloting Interoperability Frameworks: Testing interoperability frameworks in high-impact sectors would validate approaches before broader implementation. Multilateral regulatory sandboxes provide safe environments to test regulatory and technical interoperability approaches across borders.
  4. Stronger Economic and Trade Cases: Building stronger economic and trade cases for interoperability will stimulate political will. Integrating AI governance provisions into trade agreements, as seen in the USMCA’s Digital Trade Chapter, creates mechanisms for regulatory coherence while fostering digital trade.
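
To make pathway 2 concrete, here is a minimal sketch, in TypeScript, of the kind of vendor-neutral message envelope an open AI-to-AI communication standard might define. No such standard exists today; every interface, field, and identifier below is a hypothetical assumption for illustration, not any body’s actual specification.

```typescript
// Purely illustrative sketch of an open AI-to-AI message envelope.
// All names and fields are hypothetical assumptions, not a real standard.

interface AgentIdentity {
  id: string;            // stable identifier, e.g. a URI
  operator: string;      // legal entity accountable for the agent
  jurisdiction: string;  // ISO 3166 country code of the operator
}

// The actual content: either a request to another agent or a response.
type Payload =
  | { kind: "request"; task: string; inputs: Record<string, unknown> }
  | { kind: "response"; outputs: Record<string, unknown>; confidence?: number };

// Optional audit trail supporting the accountability goals discussed above.
interface Provenance {
  modelId: string;       // which model produced the content
  policyTags: string[];  // e.g. a declared risk class under a local regime
}

interface AgentMessage {
  schemaVersion: string;   // version of the (hypothetical) open schema
  sender: AgentIdentity;
  recipient: AgentIdentity;
  sentAt: string;          // ISO 8601 timestamp
  payload: Payload;
  provenance?: Provenance;
}

// Toy example: a clinical-support agent in one jurisdiction querying a
// records agent in another, with provenance attached for auditability.
const example: AgentMessage = {
  schemaVersion: "0.1",
  sender: { id: "urn:agent:clinic-eu-01", operator: "Example Clinic", jurisdiction: "DE" },
  recipient: { id: "urn:agent:records-us-02", operator: "Example Records Co", jurisdiction: "US" },
  sentAt: new Date().toISOString(),
  payload: { kind: "request", task: "summarize-history", inputs: { patientRef: "opaque-token" } },
  provenance: { modelId: "example-model-v1", policyTags: ["high-risk:health"] },
};

console.log(JSON.stringify(example, null, 2));
```

The point is not these particular fields but the design choice they illustrate: agreeing on even a thin, openly governed envelope, including identity, jurisdiction, and provenance, would let independently built systems exchange requests without bilateral integration work, which is precisely what keeps today’s ecosystems siloed.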

The Path Forward

Achieving regulatory and technical interoperability will not happen overnight, nor will it emerge spontaneously from market forces alone. The incumbents’ natural incentive is to protect their AI silos from encroachment. What is needed is a networked, multistakeholder approach that includes governments, industry, civil society, and international organizations working together on specific and achievable goals.

International initiatives like the G7 Hiroshima AI Process, the UN’s High-Level Advisory Body on AI, and the International Network of AI Safety Institutes offer promising venues for networked multistakeholder coordination. These efforts must avoid pursuing perfect uniformity and instead focus on creating coherence that enables AI systems and services to function across borders without unnecessary friction.

The alternative, a deeply fragmented AI landscape, would not only slow innovation but also entrench the power of dominant players and deepen digital divides. The time for concerted action on AI interoperability is now, while governance approaches are still evolving. By pursuing regulatory and technical interoperability together, we can pave the way for AI to fulfill its promise as a technology that benefits humanity across borders.
