California’s AI Regulation Faces Veto: Implications and Insights

California’s AI Act Vetoed

The recent decision by California Governor Gavin Newsom to veto the state's artificial intelligence regulation legislation has ignited discussion about the direction of AI policy and regulation. The bill, known as SB 1047, was designed to address the risks that artificial intelligence (AI) systems pose to public safety, but it was ultimately vetoed before becoming law.

Overview of SB 1047

SB 1047, also referred to as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was passed by the California legislature in late August 2024. The bill aimed to regulate developers of very large frontier models, meaning those requiring significant computing power or financial investment to train. It delineated four categories of critical harm that developers would be required to prevent:

  • Creation or use of weapons causing mass casualties.
  • Cyberattacks on critical infrastructure leading to mass casualties or significant financial damages.
  • Bodily injury or property damage that would constitute a crime if committed by a human.
  • Other serious threats to public safety and security.

To comply with SB 1047, developers were required to implement several measures throughout the development process, including:

  • Installing a “kill switch” allowing the immediate shutdown of an AI system.
  • Conducting independent third-party audits for compliance.
  • Reporting safety incidents within 72 hours.

Comparative Analysis with EU’s AI Act

While SB 1047 shares similarities with the European Union’s AI Act in focusing on safety and societal risks posed by AI systems, there are key differences. For instance, SB 1047 emphasizes responsibilities of developers of large frontier models rather than deployers, and it uniquely mandates the installation of a kill switch.

Support and Opposition

The bill garnered significant support from AI model developers such as Anthropic and from prominent figures in the AI community, who viewed it as a necessary step toward effective regulation. A reported 65% of Californians supported the legislation, reflecting public sentiment that AI developers should embed safety measures in their systems.

Conversely, major tech companies including Google and OpenAI opposed SB 1047, arguing that it could stifle innovation and that regulation should be handled at the federal level. Critics also included various AI researchers who were concerned about the implications for the availability of advanced models.

Governor Newsom’s Rationale for Veto

Governor Newsom expressed his concerns that SB 1047 could hinder innovation within California’s thriving AI sector, which is home to many leading AI companies. He emphasized the need for regulations grounded in empirical evidence rather than theoretical risks, suggesting that the bill’s focus on large models might overlook risks associated with smaller yet potentially dangerous AI systems.

Future Considerations in AI Regulation

Despite vetoing SB 1047, Governor Newsom signaled a commitment to AI safety and regulation by appointing an expert committee to explore how California can balance industry growth with public safety. This committee will include notable experts to advise on best practices for AI governance moving forward.

The debate surrounding SB 1047 underscores the complexity of regulating emerging technologies and the need for a coherent framework that addresses both innovation and safety. As AI technologies continue to evolve, so too must the regulations that govern their development and deployment, ensuring that they serve the public interest without stifling progress.
