California Enacts the Transparency in Frontier Artificial Intelligence Act (SB 53)
On September 29, 2025, Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (TFAIA), making California the first state to require public, standardized safety disclosures from developers of advanced “frontier” artificial intelligence (AI) models. This landmark legislation arises in the absence of comprehensive federal legislation and aligns California with Colorado and Texas in advancing state-level AI governance.
Importance of TFAIA
The TFAIA mandates public disclosures before deployment, timely reporting to the State concerning serious safety issues, and strong internal governance and whistleblower protections at qualifying developers. The statute sets a baseline for U.S. safety transparency, closely tracking emerging federal and international standards. Companies that train or modify large-scale models or partner with such developers must prioritize compliance with SB 53.
Applicability of TFAIA
TFAIA applies to developers of “frontier AI models,” defined as foundation models trained using more than 10^26 integer or floating-point operations (FLOPs). As of 2025, only a small number of models are estimated to have reached this threshold. Projections suggest approximately 10 models will meet or exceed the 10^26 FLOP threshold by 2026, with rapid growth anticipated as training budgets and clusters expand. The law imposes additional requirements on “large frontier developers,” those frontier developers whose annual revenue exceeds $500 million.
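For a rough sense of what the threshold implies, the sketch below estimates training compute with the widely used approximation of 6 × parameters × training tokens and compares the result to the statutory figure. The heuristic, the function name, and the example model size are illustrative assumptions only; the statute is concerned with the actual operations used in training, not an estimate.

```python
# Back-of-the-envelope check against the TFAIA compute threshold.
# Assumption: the common "6 * N * D" heuristic for dense-transformer training
# compute (roughly 6 FLOPs per parameter per training token). Figures are hypothetical.

TFAIA_THRESHOLD_FLOPS = 1e26  # more than 10^26 operations triggers coverage

def estimated_training_flops(num_parameters: float, num_tokens: float) -> float:
    """Rough training-compute estimate using the 6ND rule of thumb."""
    return 6 * num_parameters * num_tokens

# Hypothetical model: 1 trillion parameters trained on 20 trillion tokens.
flops = estimated_training_flops(1e12, 20e12)  # ~1.2e26 FLOPs
print(f"Estimated training compute: {flops:.1e} FLOPs")
print("Likely covered by TFAIA" if flops > TFAIA_THRESHOLD_FLOPS else "Likely below threshold")
```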
Requirements for Frontier Developers
Frontier AI Framework (Large Frontier Developers)
Large frontier developers must publish and maintain a written frontier AI framework that incorporates national and international standards. This framework must set and evaluate thresholds for capabilities that could pose catastrophic risk, apply mitigations, and test their effectiveness. The framework must also describe cybersecurity measures to secure unreleased model weights and the process for identifying and responding to critical safety incidents.
Deployment-time Transparency Reports (All Frontier Developers)
Before deploying a new frontier model or a substantially modified version, developers are required to publish a transparency report. This report must identify the model’s release date, supported languages, intended uses, and general use restrictions. Large frontier developers must additionally summarize their catastrophic-risk assessments and disclose the results of those assessments.
Critical Safety Incident Reporting (All Frontier Developers)
Developers must report qualifying critical safety incidents to the California Office of Emergency Services (OES) within 15 days of discovery. If an incident poses an imminent risk of death or serious physical injury, developers must report it within 24 hours to a public-safety authority.
Internal-Use Risk Summaries (Large Frontier Developers)
Large frontier developers must provide periodic summaries to OES regarding assessments of catastrophic risk from internal use of frontier models.
Truthfulness and Narrow Redactions
The statute prohibits materially false or misleading statements about catastrophic risk. Public disclosures may be redacted only to protect trade secrets or public safety, or to comply with applicable law.
Whistleblower Protections and Internal Channels
The law includes new provisions prohibiting retaliation against employees who raise concerns about catastrophic risks. Large frontier developers must also establish anonymous internal reporting channels and provide regular status updates to employees who report.
Enforcement and Penalties
Only the California Attorney General may initiate civil actions, with penalties reaching up to $1,000,000 per violation, scaled by severity.
State Infrastructure and Local Preemption
SB 53 establishes a consortium to design CalCompute, a state-backed public cloud cluster to support AI research. The statute also preempts local measures, adopted after January 1, 2025, that regulate frontier developers’ management of catastrophic risk.
Implementation Timeline
Core obligations, including publication, incident reporting, truthfulness, and whistleblower protections, take effect on January 1, 2026. Reporting by OES and the Attorney General begins on January 1, 2027.
Other California AI Regulations
Additional regulations, such as AB 2013 (training-data transparency) and SB 942 (AI content transparency), will also take effect on January 1, 2026.
Conclusion
Frontier developers are urged to assess their models against the 10^26 FLOP threshold and prepare to comply with the new requirements. This focus on catastrophic-risk management positions TFAIA as a potential baseline for AI safety transparency across the United States.