Rethinking AI Regulation: The Case for Entity-Based Governance

Entity-Based Regulation in Frontier AI Governance

The landscape of frontier artificial intelligence (AI) governance is shaped by a critical debate over the focus of regulation: should it center on the core technology—the AI models—or on the technology's uses? This article examines entity-based regulation as a compelling alternative to both model-based and use-based regulatory frameworks.

Introduction

At the heart of frontier AI policy lies a significant question: how should regulation be structured to balance innovation and safety? Proponents of use-based regulation emphasize the need to protect innovation by allowing developers the freedom to experiment without the constraints of licensing and technical standards. Supporters of model-based regulation counter that focusing on the models themselves concentrates compliance burdens on a small number of well-resourced developers, sparing the much larger population of downstream deployers and users and thereby facilitating broader adoption of AI technology.

However, both paradigms face substantial challenges. Use-based regulation can impose heavy compliance costs on the many businesses that deploy AI, as seen in the European Union's AI Act and various U.S. state bills. Model-based regulation, which ties obligations to technical characteristics of the models, risks becoming outdated as the technology rapidly evolves.

The Case for Entity-Based Regulation

This paper proposes an alternative: entity-based regulation. This approach focuses regulation on the large business entities that develop powerful AI systems—an approach that has proven effective in sectors such as financial services and insurance. By regulating the entities rather than the models or their applications, we may better address the unique risks posed by frontier AI development.

Entity-based regulation can keep the compliance burden manageable and allow rules to evolve alongside technological advancements. It directs attention to the organizations responsible for developing and deploying AI, enabling a more comprehensive view of the risks their systems pose.

Challenges of Existing Regulatory Frameworks

Both model-based and use-based regulations come with significant drawbacks. Model-based regulation, epitomized by California's Senate Bill 1047, drew criticism for its reliance on a fixed training-compute threshold as a proxy for capability. Critics argue that compute is an imperfect measure of what a model can do, as demonstrated when OpenAI released an innovative model that achieved strong capabilities without meeting the proposed compute threshold.

Use-based regulation, exemplified by Texas's House Bill 1709, sought to impose compliance requirements across a broad range of applications. This approach risks stifling innovation by placing onerous obligations on developers and users alike, potentially deterring the adoption of AI technologies across industries.

Advantages of Entity-Based Regulation

Entity-based regulation offers several advantages. It allows for a regulatory framework that can adapt to the fast-paced evolution of AI technologies while focusing on the entities responsible for their development. This shift in focus can enhance the effectiveness of regulations by addressing the broader array of risks associated with corporate practices, training methods, and deployment strategies.

Moreover, by setting regulatory triggers based on firm characteristics—such as annual spending on AI research and development—policymakers can avoid burdening smaller companies and startups. This approach ensures that regulatory resources are directed towards the most significant players in the field, where risks are concentrated.

Implementation of Entity-Based Regulation

To implement entity-based regulation effectively, several key components should be considered:

  • Transparency Requirements: Covered developers may be obligated to disclose salient features of their riskiest activities, including training and safety testing procedures.
  • Risk Management Protocols: Developers could be required to adopt protocols addressing novel risks associated with their AI technologies.
  • Organizational Structures: Regulations may necessitate the establishment of governance mechanisms to ensure proper risk management and compliance with safety standards.

Conclusion

The evolution of AI technology presents unique challenges for regulatory frameworks. While model-based and use-based regulations have their merits, they also come with significant limitations that may hinder innovation and fail to address the complexities of AI risks effectively. Entity-based regulation provides a promising alternative that allows for a more holistic understanding of the risks associated with frontier AI development. By focusing on the entities behind AI technologies, regulatory bodies can better manage risks while fostering innovation in a rapidly changing landscape.
