California’s Groundbreaking AI Transparency Law Takes Effect

California Enacts the Transparency in Frontier Artificial Intelligence Act (SB 53)

On September 29, 2025, Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (TFAIA), making California the first state to require public, standardized safety disclosures from developers of advanced “frontier” artificial intelligence (AI) models. The law arrives in the absence of comprehensive federal AI legislation and places California alongside Colorado and Texas in advancing state-level AI governance.

Importance of TFAIA

The TFAIA mandates public safety disclosures before deployment, timely reporting of serious safety issues to the State, and strong internal governance and whistleblower protections at qualifying developers. The statute sets a baseline for U.S. AI safety transparency that closely tracks emerging federal and international standards. Companies that train or substantially modify large-scale models, or that partner with such developers, should prioritize SB 53 compliance.

Applicability of TFAIA

TFAIA applies to developers of “frontier AI models,” defined as foundation models trained using more than 10^26 integer or floating-point operations (FLOPs). As of 2025, only a small number of models are estimated to have reached this threshold. Projections suggest approximately 10 models will meet or exceed the 10^26 FLOP threshold by 2026, with rapid growth anticipated as training budgets and clusters expand. The law imposes additional requirements on large frontier developers whose annual revenue exceeds $500,000,000.
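
For a rough sense of where this threshold sits, training compute for dense transformer models is often approximated as 6 FLOPs per parameter per training token. The sketch below is a minimal illustration of that arithmetic; the parameter and token counts are hypothetical, not drawn from any real model.

```python
# Minimal sketch of the TFAIA compute-threshold check using the common
# 6 * N * D approximation for dense-transformer training FLOPs
# (roughly 6 FLOPs per parameter per training token). The parameter
# and token counts below are hypothetical, chosen only to illustrate
# the arithmetic.

TFAIA_THRESHOLD_FLOPS = 1e26  # SB 53's "frontier model" trigger

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

flops = estimated_training_flops(n_params=1e12, n_tokens=2e13)  # 1T params, 20T tokens
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Over the 10^26 FLOP threshold:", flops > TFAIA_THRESHOLD_FLOPS)
```

Under this approximation, a hypothetical model with one trillion parameters trained on twenty trillion tokens lands at about 1.2 × 10^26 FLOPs, just over the statutory trigger.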

Requirements for Frontier Developers

Frontier AI Framework (Large Frontier Developers)

Large frontier developers must publish and maintain a written frontier AI framework that incorporates national and international standards. This framework must set and evaluate thresholds for capabilities that could pose catastrophic risk, apply mitigations, and test their effectiveness. The framework must also describe cybersecurity measures to secure unreleased model weights and the process for identifying and responding to critical safety incidents.

Deployment-time Transparency Reports (All Frontier Developers)

Before deploying a new frontier model or a substantially modified version of one, developers must publish a transparency report. The report must identify the model’s release date, supported languages, intended uses, and general use restrictions. Large frontier developers must additionally summarize their catastrophic-risk assessments and disclose the results.
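
As an illustration only, the statutory contents could be captured in a simple record like the one below; the class and field names are hypothetical and not prescribed by SB 53.

```python
# Illustrative schema for the deployment-time transparency report's
# statutory contents. The class and field names are hypothetical,
# not prescribed by SB 53.
from dataclasses import dataclass

@dataclass
class TransparencyReport:
    model_name: str
    release_date: str                  # the model's release date
    supported_languages: list[str]
    intended_uses: list[str]
    use_restrictions: list[str]        # general restrictions on use
    catastrophic_risk_summary: str | None = None  # large frontier developers only

report = TransparencyReport(
    model_name="example-frontier-model",
    release_date="2026-01-15",
    supported_languages=["en", "es"],
    intended_uses=["general-purpose assistant"],
    use_restrictions=["no use in autonomous weapons systems"],
)
print(report)
```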

Critical Safety Incident Reporting (All Frontier Developers)

Developers must report qualifying critical safety incidents to the California Office of Emergency Services (OES) within 15 days of discovery. If an incident poses an imminent risk of death or serious physical injury, developers must report it within 24 hours to a public-safety authority.
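The two reporting windows run from discovery, as in the small sketch below; the helper function and the incident date are hypothetical, included only to make the deadlines concrete.

```python
# Hypothetical deadline helper for SB 53's two incident-reporting windows:
# 15 days to the Office of Emergency Services for qualifying incidents,
# 24 hours to a public-safety authority where there is imminent risk of
# death or serious physical injury. The discovery date is made up.
from datetime import datetime, timedelta

def reporting_deadline(discovered: datetime, imminent_risk: bool) -> datetime:
    """Latest permissible reporting time, counted from discovery."""
    window = timedelta(hours=24) if imminent_risk else timedelta(days=15)
    return discovered + window

discovered = datetime(2026, 3, 1, 9, 30)
print("OES (15-day) deadline:", reporting_deadline(discovered, imminent_risk=False))
print("Imminent-risk (24-hour) deadline:", reporting_deadline(discovered, imminent_risk=True))
```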

Internal-Use Risk Summaries (Large Frontier Developers)

Large frontier developers must provide periodic summaries to OES regarding assessments of catastrophic risk from internal use of frontier models.

Truthfulness and Narrow Redactions

The statute prohibits materially false or misleading statements about catastrophic risk. Public disclosures may be redacted only to protect trade secrets or public safety, or to comply with the law.

Whistleblower Protections and Internal Channels

The law adds provisions prohibiting retaliation against employees who raise concerns about catastrophic risks. Large frontier developers must also maintain anonymous internal reporting channels and give reporting employees regular status updates.

Enforcement and Penalties

Only the California Attorney General may initiate civil actions, with penalties reaching up to $1,000,000 per violation, scaled by severity.

State Infrastructure and Local Preemption

SB 53 establishes a consortium to design CalCompute, a state-backed public cloud computing cluster to support AI research. The statute also preempts local measures, adopted after January 1, 2025, that regulate frontier developers’ management of catastrophic risk.

Implementation Timeline

Core obligations (publication, incident reporting, truthfulness, and whistleblower protections) commence on January 1, 2026. Annual reporting by OES and the Attorney General begins on January 1, 2027.

Other California AI Regulations

Additional regulations, such as AB 2013 (training-data transparency) and SB 942 (AI content transparency), will also take effect on January 1, 2026.

Conclusion

Frontier developers are urged to assess their models against the 10^26 FLOP threshold and prepare to comply with the new requirements. TFAIA’s focus on catastrophic-risk management positions it as a potential baseline for AI safety transparency across the United States.
