New State Bill Aims to Spur AI Safety Standards
Amid growing scrutiny of the potential dangers of artificial intelligence systems, California is advancing legislation to establish AI safety standards. The state Senate has passed a bill that would create a new commission responsible for officially recognizing private third-party organizations that develop safety standards and evaluate specific AI models.
Legislative Background
Senate Bill 813, authored by Sen. Jerry McNerney, addresses the pressing need for regulations to ensure AI safety. McNerney emphasizes that while government rulemaking is often slow, independent standards bodies can effectively keep pace with AI technology. “This is a tried-and-true approach to public safety,” he states.
In the past, similar regulatory attempts faced pushback from the tech industry. For example, SB 1047, proposed by Sen. Scott Wiener, aimed to require AI developers to assess risks associated with their models but was ultimately vetoed by Gov. Gavin Newsom due to concerns over the lack of widely accepted standards.
Establishment of the California AI Standards and Safety Commission
SB 813 does not itself set specific standards; instead, it would establish the California AI Standards and Safety Commission. Housed within the governor’s office, the commission would oversee the organizations that develop and apply AI standards, known as independent verification organizations (IVOs).
To gain official recognition, IVOs would have to submit plans detailing how they will evaluate AI developers and deployers to mitigate safety risks. These plans must include:
- A description of auditing procedures for AI models to ensure adherence to best practices.
- Definitions of acceptable levels of risk.
- Protocols to monitor AI models post-evaluation.
- Plans to direct developers to rectify issues when mitigation measures fail.
- Protocols for revoking certifications if corrective actions are not taken promptly.
Voluntary Standards and Market Implications
The standards developed under SB 813 would be voluntary, meaning developers and deployers are not required to have their models evaluated. However, certification from an approved organization could confer a significant marketplace advantage, acting as a “stamp of approval” for compliant AI technologies.
McNerney believes this system will encourage private industry to adopt AI safety standards, meeting public demands for accountability and safety in AI applications. “It’s pretty clear something needs to be done,” he asserts.
Concerns and Industry Response
Despite its aims, SB 813 has drawn criticism from within the tech industry. Robert Boykin, executive director of the industry lobbying group TechNet, argues that the bill introduces uncertainty without enhancing safety, citing undefined standards and a lack of clear incentives for participation.
As the bill moves through the legislative process, it remains unclear when the Assembly will take it up or whether Newsom would sign it into law. The Senate approved the measure with considerable bipartisan support, indicating growing recognition of the need for AI safety standards.
Conclusion
As the federal government contemplates its role in AI regulation, California’s SB 813 could set a precedent for state-level AI safety oversight. McNerney stresses the necessity of such standards as public demand for safe AI continues to rise.