California’s Groundbreaking AI Safety Disclosure Law

Transparency in Frontier Artificial Intelligence Act (SB-53)

On September 29, 2025, California Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (TFAIA), making California the first state to require public, standardized safety disclosures from developers of advanced artificial intelligence (AI) models.

In the absence of comprehensive federal AI safety legislation, California is leading the way among states seeking to regulate AI safety. Notably, Colorado was the first state to pass a broad AI law, the Colorado AI Act, which imposes substantive disclosure, risk management, and transparency obligations on developers of high-risk AI systems; implementation of that law, however, has been delayed until June 2026. Texas likewise passed the Texas Responsible AI Governance Act in June 2025, imposing limitations on AI development and deployment, though its scope is narrower than Colorado's law.

Overview of TFAIA

TFAIA requires developers to disclose how they manage safety risks, introducing mechanisms for transparency, accountability, and enforcement. Developers not in compliance with the law when it takes effect in January 2026 face civil penalties of up to $1,000,000 per violation, enforced by the California Attorney General.

Whom It Covers

TFAIA applies to developers of frontier AI models, defined as foundation models trained on a quantity of computing power greater than 10^26 FLOPs (floating-point operations), including all computing power used in subsequent fine-tuning or modifications. This threshold aligns with the 2023 AI Executive Order and exceeds the EU AI Act’s threshold of 10^25 FLOPs. To date, few companies have publicly disclosed compliance with this threshold, but more are expected to meet it in the coming year.
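To make the 10^26 FLOP threshold concrete, the sketch below estimates a training run's compute using the widely cited rule of thumb of roughly 6 FLOPs per parameter per training token for dense transformer models. This heuristic, and the example model sizes, are illustrative assumptions on my part; the statute counts all computing power actually used, including fine-tuning and subsequent modifications, which this back-of-the-envelope estimate does not capture.

```python
# Illustrative only: rough check of TFAIA's 10^26 FLOP "frontier model"
# threshold. The 6 * params * tokens rule of thumb is a common community
# heuristic for dense transformers, NOT the statute's counting method,
# which also includes compute used in fine-tuning and modifications.

FRONTIER_THRESHOLD_FLOPS = 1e26  # TFAIA / 2023 AI Executive Order threshold


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens


def likely_frontier_model(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^26 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) >= FRONTIER_THRESHOLD_FLOPS


# A hypothetical 1-trillion-parameter model trained on 20 trillion tokens:
# 6 * 1e12 * 2e13 = 1.2e26 FLOPs -> above the threshold.
print(likely_frontier_model(1e12, 2e13))    # True
# A hypothetical 70-billion-parameter model on 15 trillion tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs -> well below the threshold.
print(likely_frontier_model(7e10, 1.5e13))  # False
```

As the second example shows, most models released to date fall orders of magnitude short of the threshold, which is why only a handful of developers are currently expected to be covered.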

The law imposes additional transparency requirements on large frontier developers whose annual revenue exceeds $500,000,000.

Key Requirements

Developers must publish an accessible general safety framework that demonstrates how they incorporate national and international standards, assess catastrophic risks, and implement mitigation strategies. This framework must also include cybersecurity practices to secure unreleased model weights from unauthorized modifications. Developers are required to review their frameworks annually and publish any material modifications within 30 days.

When releasing a new or substantially modified frontier model, developers must publish a transparency report detailing the model’s release date, intended uses, and any restrictions on deployment. They must also summarize their catastrophic-risk assessments and disclose the role of third-party evaluators.

Reporting Critical Safety Incidents

TFAIA mandates that frontier developers notify the California Governor’s Office of Emergency Services (OES) of any critical safety incident—defined as model behavior that risks death, serious injury, or loss of control—within 15 days. If the incident poses an imminent risk, disclosure must occur within 24 hours. OES will establish a reporting portal for both public and confidential submissions of such incidents and will publish anonymized annual summaries starting in 2027.

Whistleblower Protections

The law establishes strong whistleblower protections for employees of frontier developers, prohibiting retaliation and requiring anonymous reporting channels. The California Attorney General will publish anonymized annual reports on whistleblower activities beginning in 2027.

Formation of the “CalCompute” Consortium

TFAIA directs the establishment of a consortium to create a state-backed public cloud compute cluster, CalCompute, providing advanced computing capabilities for researchers and universities. By January 1, 2027, the consortium must report to the California Legislature with details on its proposed design and governance.

Ongoing Updates to the Law

The law recognizes that AI technology is constantly evolving and directs California’s Department of Technology to review the definitions of “frontier model” and “large frontier developer” annually, with the aim of keeping California’s definitions aligned with international and federal standards. The law also acknowledges that foundation models from smaller companies may pose significant catastrophic risks, suggesting the need for future legislation.

Several controversial features of a predecessor bill, SB 1047 (vetoed in 2024), were omitted from TFAIA, including mandatory third-party audits and pre-launch testing requirements. Instead, TFAIA emphasizes transparency and accountability over pre-approval and direct control.

Future Outlook

Other AI-specific statutes are set to take effect in California in 2025 and 2026, including transparency mandates requiring developers to disclose training data and embed invisible watermarks in AI-generated content. Additionally, Congress is debating a federal “moratorium” that could impact state AI legislation. The rapid pace of state AI laws is anticipated to increase, with over 100 bills enacted across the country in recent legislative sessions.
