Understanding Global AI Governance Through a Three-Layer Framework
In the context of rapid technological advancement, the governance of artificial intelligence (AI) has emerged as a pivotal issue. The statement, “If the 20th century ran on oil and steel, the 21st century runs on compute and the minerals that feed it,” encapsulates the essence of this transition. The Trump administration’s Pax Silica Initiative, launched on December 11, 2025, aims to secure strategic segments of the global technology supply chain in cooperation with eight partner countries. Concurrently, the Agentic AI Foundation (AAIF), established by the Linux Foundation, brings together major AI players, including Amazon, Google, Microsoft, OpenAI, and Cloudflare, to build a shared ecosystem for agentic AI, that is, AI tools capable of performing tasks autonomously.
These initiatives reflect a fragmented AI governance landscape, with numerous bodies and working groups emerging alongside an increasing output of normative texts and policy papers from international organizations. Notable developments since 2024 include:
- The Council of Europe finalizing the Convention on Artificial Intelligence and a human rights risk assessment methodology for AI.
- The OECD updating its trustworthy AI principles and issuing recommendations for governmental AI use.
- The launch of the U.S. government’s Pax Silica.
- The AI Action Summit of 2025 producing a statement on inclusive and sustainable AI.
- The United Nations establishing an independent panel on AI governance.
This rapid growth brings challenges: duplicated initiatives, overlapping mandates, interoperability gaps, and potentially contradictory policies. The sheer number of bodies makes it harder for AI actors to engage meaningfully in global governance, underscoring the need for a coherent strategy for multistakeholder AI governance.
The Three-Layer Framework of Internet Governance
To navigate the evolving AI landscape, a multilayered framework inspired by the three-layer model of internet governance is proposed. Originally articulated by Yochai Benkler in 2000, this framework identifies three essential layers:
- Infrastructure Layer: This includes the physical and technical foundations of the internet—cables, routers, servers, and data centers. Governance focuses on access, connectivity, and reliability.
- Logical Layer: Comprising software, protocols, and standards that dictate information flow and system interoperability. Key organizations like ICANN and the Internet Engineering Task Force (IETF) play significant roles here.
- Content Layer: This layer is visible to users and includes all human and organizational interactions over the internet, such as content creation and economic activity.
Applying the Framework to AI Governance
Applying this three-layer framework to AI governance reveals:
- AI Infrastructure Layer: This includes the computing and data infrastructure essential for AI—semiconductors, GPUs, data centers, and the energy resources required to power them. Key players include Nvidia and Taiwan Semiconductor Manufacturing Company (TSMC).
- Logical Layer: Encompasses AI models and software systems. Unlike the internet’s open standards, this layer is currently dominated by proprietary models from firms such as OpenAI and Google DeepMind. It also includes open-source components like PyTorch and TensorFlow.
- Social Layer: Refers to human and institutional interactions using AI applications across various activities, from hiring tools to marketing. Applications include ChatGPT, Canva, and many others, demonstrating a democratization of AI tools.
While more elaborate frameworks exist for mapping AI activity, this simpler three-layer model offers a clearer view of where AI actors sit in the landscape. It also enables comparisons between global AI governance and internet governance, promoting interoperability in policy discussions across technological domains.
Observations and Governance Challenges
By mapping existing initiatives to the proposed AI layers, clustering various policy initiatives becomes feasible:
- California’s Transparency in Frontier AI Act primarily focuses on the logical layer.
- The EU AI Act addresses high-risk applications mainly in the social layer.
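The clustering exercise above can be sketched in code. The following is a minimal illustration, not an official taxonomy: the layer assignments follow the article's own examples, while the function and data-structure names are hypothetical choices for this sketch.

```python
# Sketch: modeling the three-layer framework as a simple mapping and
# grouping example policy initiatives by the layer each primarily targets.
# Layer assignments follow the article's examples; names are illustrative.
from collections import defaultdict

LAYERS = ("infrastructure", "logical", "social")

# Example initiatives and their primary layer, per the article's mapping.
initiatives = {
    "Pax Silica Initiative": "infrastructure",
    "California Transparency in Frontier AI Act": "logical",
    "EU AI Act (high-risk applications)": "social",
}

def cluster_by_layer(items: dict) -> dict:
    """Group policy initiatives by the layer they primarily address."""
    clusters = defaultdict(list)
    for name, layer in items.items():
        if layer not in LAYERS:
            raise ValueError(f"unknown layer: {layer}")
        clusters[layer].append(name)
    return dict(clusters)

print(cluster_by_layer(initiatives))
```

Even this toy grouping makes the framework's analytical payoff visible: once each initiative is tagged with a primary layer, gaps and overlaps across the governance landscape become easier to spot.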
Additionally, prominent AI companies are moving to control the full AI production stack, which raises governance concerns. For instance, Microsoft and Meta have secured nuclear energy deals for their data centers, extending their reach into the infrastructure layer and allowing them to operate across layers with less dependence on others. Cross-layer collaborations, such as the Nvidia and Google partnership, likewise underscore how tightly AI applications and infrastructure are interconnected.
This vertical integration across layers creates significant governance challenges. Calls for a stronger United Nations role in AI governance raise questions about the agility and efficacy of such an approach, and balancing innovation with regulation remains a critical challenge, as the lengthy drafting process of the EU AI Act illustrates.
Ultimately, while the three-layer framework provides a foundation for understanding AI governance, there is room for expansion and refinement to accommodate the evolving nature of AI technologies. Policymakers must prioritize targeted, data-driven approaches to ensure effective governance and foster international collaboration.