California’s New Initiative for AI Safety Standards

With increasing scrutiny on the potential dangers of artificial intelligence systems, California is advancing legislation to establish safety standards for AI. The state Senate has passed a bill to create a new commission responsible for officially recognizing private third-party organizations that develop safety standards and evaluate specific AI models.

Legislative Background

Senate Bill 813, authored by Sen. Jerry McNerney, addresses the pressing need for regulations to ensure AI safety. McNerney emphasizes that while government rulemaking is often slow, independent standards bodies can effectively keep pace with AI technology. “This is a tried-and-true approach to public safety,” he states.

In the past, similar regulatory attempts faced pushback from the tech industry. For example, SB 1047, proposed by Sen. Scott Wiener, aimed to require AI developers to assess risks associated with their models but was ultimately vetoed by Gov. Gavin Newsom due to concerns over the lack of widely accepted standards.

Establishment of the California AI Standards and Safety Commission

SB 813 does not itself set standards. Instead, it establishes the California AI Standards and Safety Commission, which would be housed within the governor’s office and would oversee the organizations that develop and apply AI standards, known as independent verification organizations (IVOs).

To gain official recognition, IVOs must submit plans detailing how they will evaluate AI developers and deployers to mitigate safety risks. These plans must include:

  • A description of auditing procedures for AI models to ensure adherence to best practices.
  • Definitions of acceptable levels of risk.
  • Protocols to monitor AI models post-evaluation.
  • Plans to direct developers to rectify issues when mitigation measures fail.
  • Protocols for revoking certifications if corrective actions are not taken promptly.

Voluntary Standards and Market Implications

The standards established by SB 813 are voluntary, meaning developers and deployers are not required to have their models evaluated. However, certification from an approved organization could serve as a significant marketplace advantage, acting as a “stamp of approval” for compliant AI technologies.

McNerney believes this system will encourage private industry to adopt AI safety standards, meeting public demands for accountability and safety in AI applications. “It’s pretty clear something needs to be done,” he asserts.

Concerns and Industry Response

Despite its intentions, SB 813 has faced criticism within the tech industry. Robert Boykin, executive director of TechNet, an industry lobbying group, argues the bill introduces uncertainty without enhancing safety, citing undefined standards and a lack of clear incentives for participation.

The Senate approved the measure with considerable bipartisan support, indicating growing recognition of the need for AI safety standards. As the bill moves through the legislative process, however, it remains unclear when the Assembly will take it up or whether Newsom will sign it into law.

Conclusion

As the federal government contemplates its role in AI regulation, California’s SB 813 could set a precedent for state-level AI safety oversight. McNerney stresses the necessity of such standards as public demand for safe AI continues to rise.
