OpenAI Faces Scrutiny Over Compliance with California AI Safety Law

OpenAI is facing scrutiny over its compliance with California’s new AI safety law following the release of its latest coding model, GPT-5.3-Codex. An AI watchdog group asserts that OpenAI may have violated the law’s provisions, which could expose the company to substantial fines.

Background of the Allegations

The Midas Project, the AI watchdog in question, claims that OpenAI failed to adhere to its own safety commitments, which are now legally binding under California law. The law, known as SB 53, requires major AI companies to publish and follow safety frameworks designed to prevent catastrophic risks, defined as incidents resulting in more than 50 deaths or $1 billion in property damage.

OpenAI’s release of GPT-5.3-Codex last week has raised significant cybersecurity concerns because the model is classified as “high” risk for cybersecurity under the company’s internal risk classification system. CEO Sam Altman acknowledged that the model’s capabilities could facilitate significant cyber harm if used at scale.

OpenAI’s Defense

In response to the allegations, an OpenAI spokesperson expressed confidence in the company’s compliance with frontier safety laws, including SB 53. The spokesperson argued that the Midas Project misinterpreted the safety framework’s wording, which they described as ambiguous. OpenAI maintains that additional safeguards were unnecessary for GPT-5.3-Codex because the model does not possess the long-range autonomy that would trigger such measures.

The spokesperson emphasized that the model had undergone a comprehensive testing and governance process, as detailed in its publicly released system card, and that internal expert evaluations found no evidence of long-range autonomy capabilities.

Dispute Over Compliance

Some safety researchers, however, contest OpenAI’s interpretation. Nathan Calvin of Encode criticized the company’s stance, arguing that it is evading responsibility for failing to update its safety plan before the release. The Midas Project further argues that OpenAI cannot conclusively demonstrate the absence of the autonomy that would require heightened safeguards.

Tyler Johnston, founder of the Midas Project, called the potential violation especially concerning given the low bar SB 53 sets for compliance: companies are merely required to adopt a voluntary safety plan and communicate honestly about it.

Potential Consequences

If investigations confirm the allegations, OpenAI could face significant penalties under SB 53, potentially amounting to millions of dollars depending on the severity and duration of any noncompliance. A representative from the California Attorney General’s Office affirmed a commitment to enforcing laws aimed at increasing transparency and safety in the AI sector, but declined to comment on ongoing investigations.

As the situation unfolds, it serves as a critical test case for the enforcement of California’s AI safety regulations and highlights the importance of adherence to safety commitments in the rapidly evolving AI landscape.
