California’s AI Regulation Faces Veto: Implications and Insights

California’s AI Act Vetoed

California Governor Gavin Newsom’s recent decision to veto the state’s sweeping artificial intelligence bill has ignited discussion about the future of AI policy and regulation. The bill, known as SB 1047, was designed to address the risks that artificial intelligence (AI) systems pose to public safety, but it will not become law.

Overview of SB 1047

SB 1047, also referred to as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was passed by the California legislature in late August 2024. The bill aimed to regulate developers of very large frontier models—those requiring significant computing power or financial investment during the training phase. It delineated four categories of critical harms that developers would have been required to prevent:

  • Creation or use of weapons causing mass casualties.
  • Cyberattacks on critical infrastructure leading to mass casualties or significant financial damages.
  • Bodily injury or property damage that would be criminal if caused by humans.
  • Other serious threats to public safety and security.

To comply with SB 1047, developers were required to implement several measures throughout the development process, including:

  • Installing a “kill switch” to allow for immediate shutdown of AI systems.
  • Conducting independent third-party audits for compliance.
  • Reporting safety incidents within 72 hours.

Comparative Analysis with EU’s AI Act

While SB 1047 shares similarities with the European Union’s AI Act in its focus on safety and the societal risks posed by AI systems, there are key differences. For instance, SB 1047 places obligations on the developers of large frontier models rather than on deployers, and it uniquely mandates the installation of a kill switch.

Support and Opposition

The bill garnered significant support from AI model developers such as Anthropic and from prominent figures in the AI community, who viewed it as a necessary step toward effective regulation. A reported 65% of Californians supported the legislation, reflecting public sentiment that AI developers should embed safety measures in their systems.

Conversely, major tech companies including Google and OpenAI opposed SB 1047, arguing that it could stifle innovation and that regulation should be handled at the federal level. Critics also included various AI researchers who were concerned about the implications for the availability of advanced models.

Governor Newsom’s Rationale for Veto

Governor Newsom expressed his concerns that SB 1047 could hinder innovation within California’s thriving AI sector, which is home to many leading AI companies. He emphasized the need for regulations grounded in empirical evidence rather than theoretical risks, suggesting that the bill’s focus on large models might overlook risks associated with smaller yet potentially dangerous AI systems.

Future Considerations in AI Regulation

Despite vetoing SB 1047, Governor Newsom signaled a commitment to AI safety and regulation by appointing an expert committee to explore how California can balance industry growth with public safety. This committee will include notable experts to advise on best practices for AI governance moving forward.

The debate surrounding SB 1047 underscores the complexity of regulating emerging technologies and the need for a coherent framework that addresses both innovation and safety. As AI technologies continue to evolve, so too must the regulations that govern their development and deployment, ensuring that they serve the public interest without stifling progress.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...