California’s AI Act Vetoed
The recent decision by California Governor Gavin Newsom to veto the state’s artificial intelligence regulation bill has ignited discussion about the direction of AI policy and regulation. The bill, known as SB 1047, was designed to address the risks that artificial intelligence (AI) systems pose to public safety, but it was ultimately vetoed rather than signed into law.
Overview of SB 1047
SB 1047, formally the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was passed by the California legislature in late August 2024. The bill aimed to regulate developers of very large frontier models, defined by thresholds on the computing power and financial investment required during training. It delineated four categories of critical harms that developers needed to prevent:
- Creation or use of chemical, biological, radiological, or nuclear weapons capable of causing mass casualties.
- Cyberattacks on critical infrastructure causing mass casualties or at least $500 million in damage.
- Death, bodily injury, or property damage that would constitute a crime if committed by a human.
- Other comparably severe threats to public safety and security.
To comply with SB 1047, developers were required to implement several measures throughout the development process, including:
- Installing a “kill switch” enabling a prompt, full shutdown of a covered AI model.
- Conducting independent third-party audits for compliance.
- Reporting safety incidents within 72 hours.
Comparative Analysis with EU’s AI Act
While SB 1047 shares the European Union’s AI Act’s focus on safety and the societal risks posed by AI systems, there are key differences. SB 1047 places responsibility on the developers of large frontier models rather than on deployers, and it uniquely mandates the installation of a kill switch.
Support and Opposition
The act garnered support from some AI developers, notably Anthropic, which offered qualified support after amendments, as well as from prominent figures in the AI community who viewed it as a necessary step toward effective regulation. A reported 65% of Californians supported the legislation, reflecting public sentiment that AI developers should embed safety measures in their systems.
Conversely, major tech companies including Google and OpenAI opposed SB 1047, arguing that it could stifle innovation and that AI regulation should be handled at the federal level. Critics also included AI researchers concerned about the bill’s implications for the availability of open and advanced models.
Governor Newsom’s Rationale for Veto
Governor Newsom expressed his concerns that SB 1047 could hinder innovation within California’s thriving AI sector, which is home to many leading AI companies. He emphasized the need for regulations grounded in empirical evidence rather than theoretical risks, suggesting that the bill’s focus on large models might overlook risks associated with smaller yet potentially dangerous AI systems.
Future Considerations in AI Regulation
Despite vetoing SB 1047, Governor Newsom signaled a continued commitment to AI safety and regulation by convening an expert committee to explore how California can balance industry growth with public safety. The committee is tasked with advising on best practices for AI governance going forward.
The debate surrounding SB 1047 underscores the complexity of regulating emerging technologies and the need for a coherent framework that addresses both innovation and safety. As AI technologies continue to evolve, so too must the regulations that govern their development and deployment, ensuring that they serve the public interest without stifling progress.