Bridging the Gaps in AI Governance

The Need for and Pathways to AI Regulatory and Technical Interoperability

As we stand at a critical juncture in AI’s development, a governance challenge is emerging that could stifle innovation and create global digital divides. The current AI governance landscape is a patchwork of fragmented regulations, technical and non-technical standards, and frameworks that make the global deployment of AI systems increasingly difficult and costly.

The Fragmented AI Governance Landscape

Today’s global AI governance environment is characterized by diverging regulatory approaches across major economies. The EU has positioned itself as a first mover with its AI Act, a binding, risk-based regime that bans certain AI applications outright and imposes stringent obligations on high-risk systems, such as those used in biometric identification and critical infrastructure. The Act stands in stark contrast to the UK’s sector-specific approach, which avoids new legislation in favor of empowering existing regulators to apply five cross-cutting principles tailored to industries like healthcare and finance. Meanwhile, the US lacks comprehensive federal AI legislation, resulting in a chaotic mix of state-level laws and non-binding federal guidelines.

States like Colorado have enacted laws with “duty of care” standards to prevent algorithmic discrimination, while others have passed various sector-specific regulations. The recent shift in US federal leadership has further complicated matters, with the Trump administration’s 2025 Executive Order focusing on “sustaining and enhancing US AI dominance.” In contrast, China combines state-driven ethical guidelines with hard laws targeting specific technologies like generative AI, emphasizing alignment with national security and government values.

Aside from hard laws, soft law initiatives add another layer of complexity to the fragmented AI governance landscape. Recent datasets capture more than 600 AI soft law programs and over 1,400 AI-related standards across organizations like IEEE, ISO, ETSI, and ITU. While some efforts, like ISO/IEC 42001 and the OECD AI Principles, have gained considerable traction, the sheer number of competing soft-law instruments creates a significant compliance burden for organizations aiming to develop or deploy AI systems globally and responsibly.

Why AI Regulatory and Technical Interoperability Matters

This fragmentation creates serious problems for innovation, safety, and equitable access to AI technologies. For instance, a healthcare algorithm developed under US state laws that permit broad biometric data collection could violate the EU’s strict data governance rules, making the global deployment of beneficial AI systems increasingly complicated. The economic costs are substantial: according to APEC’s 2023 findings, interoperable frameworks could boost cross-border AI services by 11–44% annually.

Complex and incoherent AI rules disproportionately impact startups and small and medium-sized enterprises that lack the resources to navigate fragmented compliance regimes, essentially giving large enterprises an unfair advantage. Furthermore, technical fragmentation perpetuates closed ecosystems. Without standardized interfaces for AI-to-AI communication, most systems remain siloed within corporate boundaries, stifling competition, user choice, edge-based innovation, and trust in AI systems.

When safety, fairness, and privacy rules vary dramatically between jurisdictions, users cannot confidently rely on AI applications developed outside their own market. Establishing shared regulatory and technical principles ensures that users in different markets can trust AI applications across borders.

Pathways to AI Interoperability

Fortunately, there are four promising pathways to advance both regulatory and technical interoperability. These pathways do not require completely uniform global regulations but rather focus on creating coherence that enables cross-border AI interactions while respecting national priorities:

  1. Incorporation of Global Standards: Governments should incorporate global standards and frameworks into domestic regulation. Rather than developing rules from scratch, policymakers can reference established international standards like ISO/IEC 42001. This incorporation-by-reference approach creates natural alignment in compliance mechanisms while still allowing for national customization.
  2. Open Technical Standards: We need open technical standards for AI-to-AI communication. While corporate APIs might offer short-term solutions, true open standards developed through multistakeholder bodies like IEEE, W3C, or ISO/IEC would create a level playing field; a purely illustrative sketch of what such a standard might specify appears after this list. Governments can incentivize adoption through procurement policies or tax benefits.
  3. Piloting Interoperability Frameworks: Testing interoperability frameworks in high-impact sectors would validate approaches before broader implementation. Multilateral regulatory sandboxes provide safe environments to test regulatory and technical interoperability approaches across borders.
  4. Stronger Economic and Trade Cases: Building stronger economic and trade cases for interoperability will stimulate political will. Integrating AI governance provisions into trade agreements, as seen in the USMCA’s Digital Trade Chapter, creates mechanisms for regulatory coherence while fostering digital trade.
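
To make the idea of an open AI-to-AI standard more concrete, the sketch below shows, in TypeScript, what a vendor-neutral message envelope might look like. It is a minimal illustration under assumed names only: AgentMessage, senderId, provenance, and every other identifier here are hypothetical, not drawn from any existing IEEE, W3C, or ISO/IEC specification.

    // Hypothetical, vendor-neutral envelope for AI-to-AI messages.
    // All names are illustrative assumptions, not an existing standard.
    interface AgentMessage {
      schemaVersion: string;   // version of the openly published schema
      senderId: string;        // globally unique ID of the sending system
      recipientId: string;     // ID of the receiving system
      contentType: string;     // how to interpret payload, e.g. "application/json"
      payload: unknown;        // task-specific content
      provenance: {
        producer: string;      // model or system that produced the payload
        timestamp: string;     // ISO 8601 creation time
      };
    }

    // Minimal structural check a receiving system might run before processing.
    function isAgentMessage(value: unknown): value is AgentMessage {
      if (typeof value !== "object" || value === null) return false;
      const m = value as Record<string, unknown>;
      return (
        typeof m.schemaVersion === "string" &&
        typeof m.senderId === "string" &&
        typeof m.recipientId === "string" &&
        typeof m.contentType === "string" &&
        typeof m.provenance === "object" &&
        m.provenance !== null
      );
    }

    // Example: a message any conforming implementation could parse.
    const example: AgentMessage = {
      schemaVersion: "0.1",
      senderId: "urn:example:scheduling-agent",
      recipientId: "urn:example:calendar-agent",
      contentType: "application/json",
      payload: { intent: "propose-meeting", slots: ["2025-07-01T10:00Z"] },
      provenance: { producer: "example-model-v1", timestamp: "2025-06-30T09:00:00Z" },
    };

    console.log(isAgentMessage(example)); // true

The value of such a schema would come less from any particular field than from it being versioned, openly published, and maintained by a body that no single vendor controls.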

The Path Forward

Achieving regulatory and technical interoperability will not happen overnight, nor will it emerge spontaneously from market forces alone. The incumbents’ natural incentive is to protect their AI silos from encroachment. What is needed is a networked, multistakeholder approach that includes governments, industry, civil society, and international organizations working together on specific and achievable goals.

International initiatives like the G7 Hiroshima AI Process, the UN’s High-Level Advisory Body on AI, and the International Network of AI Safety Institutes offer promising venues for networked multistakeholder coordination. These efforts must avoid pursuing perfect uniformity and instead focus on creating coherence that enables AI systems and services to function across borders without unnecessary friction.

The alternative, a deeply fragmented AI landscape, would not only slow innovation but also entrench the power of dominant players and deepen digital divides. The time for concerted action on AI interoperability is now, while governance approaches are still evolving. By pursuing regulatory and technical interoperability together, we can pave the way for AI to fulfill its promise as a technology that benefits humanity across borders.
