Empowering States to Regulate AI

States Regulating AI: A Necessity Amid Congressional Inaction

The discourse surrounding the regulation of artificial intelligence (AI) has taken a pivotal turn as U.S. lawmakers contemplate measures that would significantly curtail state-level regulation. The proposal currently under review in the Senate includes a provision that would impose a ten-year moratorium on state AI regulation. Proponents argue the move would accelerate AI development within the United States and keep American technology at the forefront of global innovation.

The Risks of Federal Moratorium

Critics argue that imposing such a moratorium would stifle U.S. AI innovation and potentially jeopardize national security. Effective governance of AI is essential not only for fostering innovation but also for safeguarding the interests of the nation. State governments are already stepping up to build the necessary infrastructure for AI governance, addressing the unique needs of their constituents. The proposed federal ban undermines these efforts.

Debating the Patchwork of State Laws

While some assert that limiting state AI regulation is necessary to avoid hindering innovation, the reality of congressional gridlock and partisanship makes state laws indispensable. For over a decade, Congress has failed to enact meaningful technology regulation, leaving states to fill the void. States are often more attuned to their residents' concerns about AI and face fewer partisan barriers to enacting effective policy.

The Importance of State Governments in AI Governance

State governments play a crucial role in establishing the governance infrastructure for AI. This infrastructure encompasses far more than rulemaking alone. It includes:

  • Strengthening workforce capacity, ensuring a skilled labor force capable of managing AI systems.
  • Sharing information about emerging risks associated with AI technologies.
  • Building shared resources that facilitate AI experimentation and development.

For instance, a robust system of third-party auditors can aid AI companies in identifying security risks and improving internal processes. Moreover, effective information sharing can enable rapid response to potential AI-related harms.

States Leading the Way in AI Initiatives

Many states have already initiated programs to enhance their AI governance capabilities. Nearly every state has registered AI apprenticeship programs and related training to ensure a workforce adept in building and overseeing AI systems. Recent initiatives, such as New York’s proposal to establish an AI computing center, exemplify the proactive measures being taken at the state level to promote research and create job opportunities.

Furthermore, various AI bills are currently under consideration in state legislatures, with some already becoming law. These laws are crucial for experimenting with governance approaches that could later be adopted by other states, akin to California’s environmental regulations serving as a nationwide model.

Conclusion: The Need for a Balanced Approach

Imposing a moratorium on state-level AI regulation would contradict Congress’s objectives of fostering U.S. innovation and ensuring national security. A balanced approach that incorporates both state-driven governance and federal oversight is essential for cultivating a thriving and secure AI ecosystem.

Effective AI governance is a collaborative effort that necessitates the participation of both state and federal entities. As the landscape of technology continues to evolve, so too must our regulatory frameworks to keep pace with innovation while protecting the public interest.
