OpenAI Disputes Allegations of Violating California’s AI Safety Law
OpenAI is embroiled in controversy over its compliance with California’s new AI safety law following the release of its latest coding model, GPT-5.3-Codex. An AI watchdog group asserts that OpenAI may have violated the law’s provisions, which could expose the company to substantial fines.
Background of the Allegations
The Midas Project, the AI watchdog in question, claims that OpenAI failed to adhere to its own safety commitments, which are now legally binding under California law. That law, known as SB 53, requires major AI companies to publish and follow safety frameworks designed to prevent catastrophic risks, defined as incidents causing more than 50 deaths or more than $1 billion in property damage.
OpenAI’s release of GPT-5.3-Codex last week has raised particular concern because the model falls into the “high” risk category for cybersecurity under the company’s internal risk classification system. CEO Sam Altman acknowledged that the model’s capabilities could facilitate significant cyber harm if used at scale.
OpenAI’s Defense
In response to the allegations, an OpenAI spokesperson expressed confidence that the company complies with frontier safety laws, including SB 53. The spokesperson argued that the Midas Project had misinterpreted the safety framework’s wording, which the spokesperson acknowledged was ambiguous. OpenAI maintains that additional safeguards were unnecessary for GPT-5.3-Codex because the model does not possess the long-range autonomy that would trigger such measures.
The spokesperson emphasized that the model had undergone a comprehensive testing and governance process, as detailed in its publicly released system card, and that evaluations by internal experts found no long-range autonomy capabilities.
Dispute Over Compliance
Some safety researchers, however, contest OpenAI’s interpretation. Nathan Calvin of Encode criticized the company’s stance, suggesting that it is evading responsibility for failing to update its safety plan before the release. The Midas Project further argues that OpenAI cannot conclusively demonstrate that the model lacks the autonomy that would require heightened safeguards.
Tyler Johnston, founder of the Midas Project, pointed out the irony of the situation: a violation would be especially concerning given the low bar SB 53 sets for compliance, since companies are merely required to adopt a voluntary safety plan and communicate honestly about it.
Potential Consequences
If investigations confirm the allegations, OpenAI could face significant penalties under SB 53, potentially amounting to millions of dollars depending on the severity and duration of any noncompliance. A representative of the California Attorney General’s Office affirmed a commitment to enforcing laws aimed at increasing transparency and safety in the AI sector, but declined to comment on ongoing investigations.
As the situation unfolds, it serves as a critical test case for the enforcement of California’s AI safety regulations and highlights the importance of adherence to safety commitments in the rapidly evolving AI landscape.