Rethinking AI Regulation: The Case for Entity-Based Governance

Entity-Based Regulation in Frontier AI Governance

The landscape of frontier artificial intelligence (AI) governance is shaped by a central debate over the focus of regulation: should it target the core technology, the AI models themselves, or the uses to which that technology is put? This study examines entity-based regulation as a compelling alternative to traditional model-based and use-based regulatory frameworks.

Introduction

At the heart of frontier AI policy lies a significant question: how should regulation be structured to balance innovation and safety? Proponents of use-based regulation emphasize protecting innovation by targeting harmful applications rather than the underlying technology, leaving developers free to experiment without licensing requirements or technical standards. Supporters of model-based regulation counter that placing compliance burdens on the small number of frontier developers, rather than on the many downstream deployers and users, keeps the technology broadly usable.

Both paradigms face substantial challenges, however. Use-based regulation can impose heavy compliance costs across a wide range of actors and applications, as seen in the European Union's AI Act and various U.S. state bills. Model-based regulation, which tries to capture dangerous capabilities through fixed technical thresholds, risks becoming outdated as the technology rapidly evolves.

The Case for Entity-Based Regulation

This paper proposes an alternative: entity-based regulation. This approach focuses on the large business entities developing powerful AI systems, a strategy that has proven effective in sectors such as financial services and insurance. By regulating the entities rather than the models or their applications, policymakers may better address the unique risks posed by frontier AI development.

Entity-based regulation can keep the compliance burden manageable and allow rules to evolve alongside technological advancements. It directs attention to the organizations developing and deploying AI, enabling a more comprehensive view of the risks their systems pose.

Challenges of Existing Regulatory Frameworks

Both model-based and use-based regulation come with significant drawbacks. Model-based regulation, epitomized by California's Senate Bill 1047, drew criticism for relying on a fixed training-compute threshold (10^26 operations) as a proxy for capability. Critics argue that such thresholds fail to track how capabilities actually emerge, as demonstrated when OpenAI released a highly capable model trained with less compute than the bill's proposed threshold.

Use-based regulation, exemplified by Texas's House Bill 1709, sought to impose compliance requirements across a broad range of applications. This approach risks stifling innovation by placing onerous obligations on developers and downstream users alike, potentially deterring the adoption of AI technologies across many industries.

Advantages of Entity-Based Regulation

Entity-based regulation offers several advantages. It yields a framework that can adapt to the fast-paced evolution of AI technologies while keeping its focus on the entities responsible for their development. This shift lets regulation address the broader array of risks arising from corporate practices, training methods, and deployment strategies.

Moreover, by setting regulatory triggers based on firm characteristics—such as annual spending on AI research and development—policymakers can avoid burdening smaller companies and startups. This approach ensures that regulatory resources are directed towards the most significant players in the field, where risks are concentrated.
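To make the contrast concrete, here is a minimal Python sketch of the two kinds of regulatory triggers discussed above. Everything in it is hypothetical: the entity-level spend threshold, the firm, and its figures are invented for illustration, and the compute bar simply echoes the Senate Bill 1047 figure mentioned earlier.

    from dataclasses import dataclass

    # Hypothetical thresholds for illustration only; a real statute would set
    # these values, and how they update over time, through the policy process.
    TRAINING_COMPUTE_THRESHOLD_FLOP = 1e26    # model-based trigger (SB 1047-style)
    ANNUAL_AI_RND_SPEND_THRESHOLD_USD = 1e9   # entity-based trigger (invented)

    @dataclass
    class Model:
        name: str
        training_compute_flop: float

    @dataclass
    class Entity:
        name: str
        annual_ai_rnd_spend_usd: float
        models: list

    def covered_by_model_rule(model: Model) -> bool:
        # Model-based trigger: each model is tested against a fixed compute bar.
        return model.training_compute_flop >= TRAINING_COMPUTE_THRESHOLD_FLOP

    def covered_by_entity_rule(entity: Entity) -> bool:
        # Entity-based trigger: coverage follows from firm characteristics,
        # not from how any single model was trained.
        return entity.annual_ai_rnd_spend_usd >= ANNUAL_AI_RND_SPEND_THRESHOLD_USD

    # A hypothetical large lab whose newest model is trained efficiently,
    # below the compute bar.
    lab = Entity(
        name="FrontierLab",
        annual_ai_rnd_spend_usd=5e9,
        models=[Model("efficient-model", training_compute_flop=3e25)],
    )

    print(any(covered_by_model_rule(m) for m in lab.models))  # False
    print(covered_by_entity_rule(lab))                        # True

The point of the sketch is the locus of the test: a model-based rule can miss a highly capable but efficiently trained system, while an entity-based rule turns on who is doing frontier-scale development, and a startup spending far less would remain exempt.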

Implementation of Entity-Based Regulation

To implement entity-based regulation effectively, several key components should be considered:

  • Transparency Requirements: Covered developers may be obligated to disclose salient features of their riskiest activities, including training and safety testing procedures.
  • Risk Management Protocols: Developers could be required to adopt protocols addressing novel risks associated with their AI technologies.
  • Organizational Structures: Regulations may require developers to establish governance mechanisms that ensure proper risk management and compliance with safety standards.
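As a minimal illustration of how these components might be operationalized, the sketch below models them as a per-entity compliance checklist. The class and field names are hypothetical, invented for this example rather than drawn from any statute.

    from dataclasses import dataclass

    @dataclass
    class ComplianceRecord:
        # Hypothetical per-entity record; field names are illustrative.
        disclosed_training_procedures: bool = False   # transparency requirement
        disclosed_safety_testing: bool = False        # transparency requirement
        risk_protocol_adopted: bool = False           # risk management protocol
        governance_structure_in_place: bool = False   # organizational structure

        def outstanding_obligations(self) -> list:
            # Names of the obligations this entity has not yet satisfied.
            return [name for name, met in vars(self).items() if not met]

    record = ComplianceRecord(disclosed_training_procedures=True)
    print(record.outstanding_obligations())
    # ['disclosed_safety_testing', 'risk_protocol_adopted',
    #  'governance_structure_in_place']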

Conclusion

The evolution of AI technology presents unique challenges for regulatory frameworks. While model-based and use-based regulations have their merits, they also come with significant limitations that may hinder innovation and fail to address the complexities of AI risks effectively. Entity-based regulation provides a promising alternative that allows for a more holistic understanding of the risks associated with frontier AI development. By focusing on the entities behind AI technologies, regulatory bodies can better manage risks while fostering innovation in a rapidly changing landscape.
