Entity-Based Regulation in Frontier AI Governance
Frontier artificial intelligence (AI) governance is defined by a critical debate over the focus of regulation: should it center on the core technology—the AI models—or on the technology's uses? This article examines entity-based regulation as a compelling alternative to both model-based and use-based regulatory frameworks.
Introduction
At the heart of frontier AI policy lies a significant question: how should regulation be structured to balance innovation and safety? Proponents of use-based regulation emphasize the need to protect innovation by allowing developers the freedom to experiment without the constraints of licensing and technical standards. In contrast, supporters of model-based regulation argue that placing compliance burdens on the small number of firms that develop powerful models, rather than on the much larger population of downstream users, leaves the technology's many applications largely unencumbered.
However, both paradigms face substantial challenges. Use-based regulations can impose heavy compliance costs across a wide range of applications, as seen in the European Union's AI Act and various U.S. state bills. Model-based regulations, which typically rely on technical thresholds as proxies for capability, risk becoming outdated as the technology rapidly evolves.
The Case for Entity-Based Regulation
This article proposes an alternative: entity-based regulation. This approach focuses on the large business entities developing powerful AI systems—a regulatory strategy that has proven effective in sectors such as financial services and insurance. By regulating the entities rather than the models or their applications, we may better address the unique risks posed by frontier AI development.
Entity-based regulation can keep compliance burdens manageable and allow rules to evolve alongside technological advancements. It directs attention to the organizations responsible for developing and deploying AI, enabling a more comprehensive view of the risks those organizations' systems pose.
Challenges of Existing Regulatory Frameworks
Both model-based and use-based regulations come with significant drawbacks. Model-based regulation, epitomized by California's Senate Bill 1047, drew criticism for its reliance on a training-compute threshold as a proxy for capability. Critics argue that such proxies fail to track the complexities of AI capabilities, a concern underscored when OpenAI released a model that achieved strong performance without meeting the proposed compute threshold.
Use-based regulation, such as Texas's House Bill 1709, aimed to impose compliance requirements across a broad range of applications. This approach risks stifling innovation by placing onerous obligations on developers and users alike, potentially deterring the adoption of AI technologies across many industries.
Advantages of Entity-Based Regulation
Entity-based regulation offers several advantages. It allows for a regulatory framework that can adapt to the fast-paced evolution of AI technologies while focusing on the entities responsible for their development. This shift in focus can enhance the effectiveness of regulations by addressing the broader array of risks associated with corporate practices, training methods, and deployment strategies.
Moreover, by setting regulatory triggers based on firm characteristics—such as annual spending on AI research and development—policymakers can avoid burdening smaller companies and startups. This approach ensures that regulatory resources are directed towards the most significant players in the field, where risks are concentrated.
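The firm-characteristic trigger described above can be sketched in code. This is a purely illustrative example: the `Firm` fields, the `is_covered_developer` test, and the spending threshold are all hypothetical assumptions for exposition, not drawn from any actual statute or proposal.

```python
from dataclasses import dataclass


@dataclass
class Firm:
    """Hypothetical record of the firm characteristics a regulator might collect."""
    name: str
    annual_ai_rd_spend_usd: float  # annual spending on AI research and development


# Assumed threshold for illustration only; a real regime would set this
# through the legislative or rulemaking process.
AI_RD_SPEND_THRESHOLD_USD = 100_000_000


def is_covered_developer(firm: Firm) -> bool:
    """Return True if the firm's AI R&D spending places it in the covered tier.

    Note the contrast with a model-based trigger: coverage turns on a
    characteristic of the entity, not on any property of a specific model.
    """
    return firm.annual_ai_rd_spend_usd >= AI_RD_SPEND_THRESHOLD_USD


startup = Firm("SmallLab", annual_ai_rd_spend_usd=2_000_000)
frontier_lab = Firm("BigLab", annual_ai_rd_spend_usd=500_000_000)
```

Under this sketch, `is_covered_developer(startup)` is false and `is_covered_developer(frontier_lab)` is true, capturing the intent that regulatory resources concentrate on the largest players while smaller companies remain outside the regime.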
Implementation of Entity-Based Regulation
To implement entity-based regulation effectively, several key components should be considered:
- Transparency Requirements: Covered developers may be obligated to disclose salient features of their riskiest activities, including training and safety testing procedures.
- Risk Management Protocols: Developers could be required to adopt protocols addressing novel risks associated with their AI technologies.
- Organizational Structures: Regulations may necessitate the establishment of governance mechanisms to ensure proper risk management and compliance with safety standards.
Conclusion
The evolution of AI technology presents unique challenges for regulatory frameworks. While model-based and use-based regulations have their merits, both carry significant limitations that may hinder innovation and fail to capture the full range of AI risks. Entity-based regulation offers a promising alternative: by focusing on the entities behind frontier AI development, regulators can take a more holistic view of the risks involved and manage them more effectively while fostering innovation in a rapidly changing landscape.