Governance Strategies for AI Compute Power

Computing Power and the Governance of AI

Computing power plays a central role in the advancement of artificial intelligence (AI). This study examines the dramatic increases in computing power over the past decade and explores how governance of this resource can shape the future of AI development.

Introduction

Over the last thirteen years, the amount of compute used to train leading AI systems has skyrocketed, increasing by a factor of 350 million. This unprecedented growth has facilitated major breakthroughs in AI technologies, capturing global attention and concern.
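As a rough illustration of what that figure implies (assuming only the 350-million-fold increase over thirteen years stated above), the compound annual growth factor and the corresponding doubling time can be worked out directly:

```python
import math

total_growth = 350e6  # growth factor stated in the text
years = 13

# Compound annual growth factor g satisfies: g ** years == total_growth
annual = total_growth ** (1 / years)

# Doubling time in months: annual ** (t / 12) == 2
doubling_months = 12 * math.log(2) / math.log(annual)

print(f"annual growth factor: {annual:.1f}x")        # roughly 4.5x per year
print(f"doubling time: {doubling_months:.1f} months")  # roughly 5.5 months
```

This back-of-the-envelope calculation suggests training compute for leading systems has been doubling roughly every six months, far outpacing Moore's law.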

Governments and Compute Governance

As AI continues to evolve, governments have recognized the importance of compute governance. This approach uses computing power as a point of leverage for achieving various AI policy goals, including:

  • Visibility into AI development
  • Resource allocation across AI projects
  • Enforcement of regulations

By monitoring compute usage, governments can gain insight into how AI systems are developed and deployed. This visibility can enable faster regulatory responses to emerging AI capabilities.

Properties of Compute Governance

Compute governance is particularly feasible due to four key properties:

  • Detectability: Training frontier AI systems requires large, energy-intensive clusters of specialized hardware, which are difficult to conceal.
  • Excludability: The physical nature of compute resources allows for targeted access control.
  • Quantifiability: Compute can be measured easily, allowing for effective monitoring.
  • Concentrated supply chain: AI chip production is dominated by a few key players, simplifying governance efforts.
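To illustrate quantifiability, a widely used back-of-the-envelope heuristic (not from the original text) estimates training compute as roughly 6 floating-point operations per model parameter per training token. The model size and token count below are hypothetical:

```python
def estimate_training_flops(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate via the common ~6 * N * D heuristic,
    where N is the parameter count and D is the number of training tokens."""
    return 6 * parameters * tokens

# Hypothetical model: 70 billion parameters trained on 2 trillion tokens
flops = estimate_training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs")  # 8.40e+23
```

Simple, auditable estimates like this are what make compute-based reporting thresholds administrable in practice.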

Using Compute for Governance Goals

Compute governance can support various AI governance objectives:

  • Increasing visibility into AI projects, enabling a better understanding of who is using compute resources and for what.
  • Allocating compute resources to prioritize beneficial AI research, for example in health and climate.
  • Enforcing rules and regulations to mitigate risks associated with AI development.

The Challenges of Compute Governance

Despite its potential, compute governance is not without its challenges. It can:

  • Infringe on civil liberties if not carefully implemented.
  • Perpetuate existing power structures or entrench authoritarian regimes.
  • Risk data leakage and privacy violations as more parties gain access to compute-related information.

Ensuring Effective Compute Governance

To mitigate risks associated with compute governance, several strategies can be employed:

  • Targeted Governance: Focus governance measures on large-scale computing resources relevant to frontier AI systems.
  • Privacy Measures: Implement practices that protect personal data while allowing for effective governance.
  • Regular Reviews: Revisit policies periodically to ensure they remain relevant as technology evolves.
  • Substantive Safeguards: Establish controls to prevent abuses of power by regulators and other actors.

Conclusion

The governance of computing power is a crucial aspect of the AI landscape. While it presents significant opportunities for shaping the future of AI, it must be approached with caution to avoid unintended consequences. As the AI ecosystem continues to develop, policymakers must critically assess and enhance compute governance frameworks to balance innovation with ethical considerations.
