Three-Layer Framework for Global AI Governance

In the context of rapid technological advancement, the governance of artificial intelligence (AI) has emerged as a pivotal issue. The statement, “If the 20th century ran on oil and steel, the 21st century runs on compute and the minerals that feed it,” captures the essence of this transition. The Trump administration’s Pax Silica Initiative, launched on December 11, 2025, aims to secure strategic segments of the global technology supply chain through collaboration with eight partner countries. Concurrently, the Agentic AI Foundation (AAIF), established under the Linux Foundation, brings together AI giants including Amazon, Google, Microsoft, OpenAI, and Cloudflare to build a shared ecosystem for agentic AI: AI tools capable of performing tasks autonomously.

These initiatives reflect a fragmented AI governance landscape, with numerous bodies and working groups emerging alongside an increasing output of normative texts and policy papers from international organizations. Notable developments since 2024 include:

  • The Council of Europe finalizing the Convention on Artificial Intelligence and a human rights risk assessment methodology for AI.
  • The OECD updating its trustworthy AI principles and issuing recommendations for governmental AI use.
  • The launch of the U.S. government’s Pax Silica.
  • The AI Action Summit of 2025 producing a statement on inclusive and sustainable AI.
  • The United Nations establishing an independent panel on AI governance.

This rapid growth brings its own challenges: duplicated initiatives, overlap, interoperability gaps, and potentially contradictory policies. The sheer number of bodies involved makes it harder for AI actors to engage in global governance, underscoring the need for a coherent multistakeholder strategy.

The Three-Layer Framework of Internet Governance

To navigate the evolving AI landscape, a multilayered framework inspired by the three-layer model of internet governance is proposed. Originally articulated by Yochai Benkler in 2000, this framework identifies three essential layers:

  1. Infrastructure Layer: This includes the physical and technical foundations of the internet—cables, routers, servers, and data centers. Governance focuses on access, connectivity, and reliability.
  2. Logical Layer: Comprising software, protocols, and standards that dictate information flow and system interoperability. Key organizations like ICANN and the Internet Engineering Task Force (IETF) play significant roles here.
  3. Content Layer: This layer is visible to users and includes all human and organizational interactions over the internet, such as content creation and economic activity.

Applying the Framework to AI Governance

Applying this three-layer framework to AI governance reveals:

  1. AI Infrastructure Layer: This includes the computing and data infrastructure essential for AI—semiconductors, GPUs, data centers, and the energy resources required to power them. Key players include Nvidia and Taiwan Semiconductor Manufacturing Company (TSMC).
  2. Logical Layer: Encompasses AI models and software systems. Unlike the internet’s open standards, this layer is currently dominated by proprietary models from firms such as OpenAI and Google DeepMind. It also includes open-source components like PyTorch and TensorFlow.
  3. Social Layer: Refers to human and institutional interactions using AI applications across various activities, from hiring tools to marketing. Applications include ChatGPT, Canva, and many others, demonstrating a democratization of AI tools.

While more elaborate frameworks exist for mapping AI activity, this simpler three-layer model offers a clearer view of where AI actors operate within the landscape. It also facilitates comparisons between global AI governance and internet governance, promoting interoperability in policy discussions across technological domains.

Observations and Governance Challenges

Mapping existing initiatives onto these AI layers makes it possible to cluster policy activity:

  • California’s Transparency in Frontier AI Act primarily focuses on the logical layer.
  • The EU AI Act addresses high-risk applications mainly in the social layer.
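As a toy illustration, this layer-based clustering can be sketched in code. The data structure and function below are hypothetical; the layer assignments for the California act and the EU AI Act follow the article, while placing Pax Silica in the infrastructure layer is an assumption based on its supply-chain focus.

```python
from collections import defaultdict

# Hypothetical mapping of policy initiatives to the three AI layers.
# The first two assignments follow the article's analysis; the Pax Silica
# placement is an assumption (its focus is the technology supply chain).
INITIATIVES = [
    ("California Transparency in Frontier AI Act", "logical"),
    ("EU AI Act (high-risk applications)", "social"),
    ("Pax Silica Initiative", "infrastructure"),
]

def cluster_by_layer(initiatives):
    """Group initiative names by the layer they primarily target."""
    layers = defaultdict(list)
    for name, layer in initiatives:
        layers[layer].append(name)
    return dict(layers)

clusters = cluster_by_layer(INITIATIVES)
for layer in ("infrastructure", "logical", "social"):
    print(layer, "->", clusters.get(layer, []))
```

The point of the sketch is simply that a layered taxonomy turns a flat list of initiatives into comparable clusters, which is what makes overlap and gaps visible.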

Additionally, prominent AI companies are pursuing vertical integration, seeking control over the entire AI production stack, which raises governance concerns. For instance, Microsoft and Meta have secured nuclear energy deals for their data centers, allowing them to operate more independently across layers. Cross-layer collaborations, such as Nvidia and Google’s partnership, underscore the interconnectedness of AI applications and infrastructure.

This integration across layers creates significant governance challenges. Calls for a more substantial United Nations role in AI governance raise questions about the agility and efficacy of such an approach, and balancing innovation with regulation remains a critical challenge, as the lengthy drafting process of the EU AI Act illustrates.

Ultimately, while the three-layer framework provides a foundation for understanding AI governance, there is room for expansion and refinement to accommodate the evolving nature of AI technologies. Policymakers must prioritize targeted, data-driven approaches to ensure effective governance and foster international collaboration.
