Rethinking AI Regulation: The Case for Entity-Based Governance

Entity-Based Regulation in Frontier AI Governance

The landscape of frontier artificial intelligence (AI) governance is characterized by a critical debate over the focus of regulation: should it center on the core technology, the AI models themselves, or on the technology's uses? This article examines entity-based regulation as a compelling alternative to both model-based and use-based regulatory frameworks.

Introduction

At the heart of frontier AI policy lies a significant question: how should regulation be structured to balance innovation and safety? Proponents of use-based regulation argue that targeting harmful applications, rather than the models themselves, protects innovation by leaving developers free to experiment without licensing requirements or technical standards. Supporters of model-based regulation counter that concentrating compliance burdens on a small number of well-resourced developers, rather than on the many downstream deployers, keeps the broader application of AI technology unencumbered.

However, both paradigms face substantial challenges. Use-based regulation can impose heavy compliance costs on the many businesses that deploy AI, as critics have charged of the European Union's AI Act and various U.S. state bills. Model-based regulation, which ties obligations to technical characteristics of the models, risks quickly becoming outdated as the technology evolves.

The Case for Entity-Based Regulation

This paper proposes an alternative: entity-based regulation. Rather than targeting models or their applications, this approach regulates the large business entities that develop powerful AI systems, a strategy long used in sectors such as financial services and insurance. Regulating the entity may better address the unique risks posed by frontier AI development.

Entity-based regulation can keep the compliance burden manageable and allow rules to evolve alongside technological advancements. By directing attention to the organizations that develop and deploy AI, it permits a more comprehensive view of the risks these systems pose.

Challenges of Existing Regulatory Frameworks

Both model-based and use-based regulations come with significant drawbacks. Model-based regulation, epitomized by California's Senate Bill 1047, drew criticism for its reliance on a training-compute threshold as a proxy for model capability. Critics argue that compute poorly captures what models can actually do: OpenAI's o1 model, for example, achieved major capability gains through inference-time reasoning rather than a larger training run, and so would not necessarily have crossed the proposed threshold.

Use-based regulation, such as Texas's House Bill 1709, aimed to impose compliance requirements across a broad range of applications. This breadth risks stifling innovation by placing onerous obligations on developers and deployers alike, potentially deterring the adoption of AI technologies across industries.

Advantages of Entity-Based Regulation

Entity-based regulation offers several advantages. It allows for a regulatory framework that can adapt to the fast-paced evolution of AI technologies while focusing on the entities responsible for their development. This shift in focus can enhance the effectiveness of regulations by addressing the broader array of risks associated with corporate practices, training methods, and deployment strategies.

Moreover, by setting regulatory triggers based on firm characteristics—such as annual spending on AI research and development—policymakers can avoid burdening smaller companies and startups. This approach ensures that regulatory resources are directed towards the most significant players in the field, where risks are concentrated.
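To make the firm-level trigger concrete, here is a minimal sketch in Python. The $1 billion threshold, the `Firm` fields, and the function name are purely illustrative assumptions for this sketch, not provisions of any actual bill.

```python
# Hypothetical sketch of an entity-based coverage trigger.
# The threshold and field names below are illustrative assumptions,
# not drawn from any enacted or proposed statute.
from dataclasses import dataclass

@dataclass
class Firm:
    name: str
    annual_ai_rd_spend_usd: int  # yearly spending on AI research and development

# Illustrative trigger: coverage keys on firm spending, not on any one model.
COVERAGE_THRESHOLD_USD = 1_000_000_000

def is_covered_developer(firm: Firm) -> bool:
    """A firm is covered once its AI R&D spending crosses the threshold."""
    return firm.annual_ai_rd_spend_usd >= COVERAGE_THRESHOLD_USD

startup = Firm("SmallLab", 5_000_000)
frontier = Firm("FrontierCo", 3_000_000_000)
print(is_covered_developer(startup))   # small firms fall outside scope
print(is_covered_developer(frontier))  # large developers are covered
```

The design point of such a trigger is that coverage follows the organization's overall scale of investment, so obligations attach to the largest players while startups below the line are untouched, regardless of how any single model scores on a technical metric.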

Implementation of Entity-Based Regulation

To implement entity-based regulation effectively, several key components should be considered:

  • Transparency Requirements: Covered developers may be obligated to disclose salient features of their riskiest activities, including training and safety testing procedures.
  • Risk Management Protocols: Developers could be required to adopt protocols addressing novel risks associated with their AI technologies.
  • Organizational Structures: Regulations may necessitate the establishment of governance mechanisms to ensure proper risk management and compliance with safety standards.

Conclusion

The evolution of AI technology presents unique challenges for regulatory frameworks. While model-based and use-based regulations have their merits, they also come with significant limitations that may hinder innovation and fail to address the complexities of AI risks effectively. Entity-based regulation provides a promising alternative that allows for a more holistic understanding of the risks associated with frontier AI development. By focusing on the entities behind AI technologies, regulatory bodies can better manage risks while fostering innovation in a rapidly changing landscape.
