California’s Groundbreaking AI Safety Disclosure Law

Transparency in Frontier Artificial Intelligence Act (SB-53)

On September 29, 2025, California Governor Gavin Newsom signed into law the Transparency in Frontier Artificial Intelligence Act (TFAIA), making California the first state to require public, standardized safety disclosures from developers of advanced artificial intelligence (AI) models.

In the absence of comprehensive federal legislation addressing AI safety, California is leading the way among states seeking to regulate AI safety issues. Notably, Colorado was the first state to pass a broad AI law, the Colorado AI Act, which imposes substantive disclosure, risk management, and transparency obligations on developers of high-risk AI systems; its implementation, however, has been delayed until June 2026. Texas followed with the Texas Responsible AI Governance Act in June 2025, which imposes limitations on AI development and deployment but is narrower in scope than Colorado’s law.

Overview of TFAIA

TFAIA requires developers to disclose how they manage safety risks, introducing mechanisms for transparency, accountability, and enforcement. Developers not in compliance when the law takes effect on January 1, 2026, face civil penalties of up to $1,000,000 per violation, enforced by the California Attorney General.

Whom It Covers

TFAIA applies to developers of frontier AI models, defined as foundation models trained using more than 10^26 floating-point operations (FLOPs) of computing power, including all compute used in subsequent fine-tuning or modifications. This threshold matches the one in the 2023 AI Executive Order and sits an order of magnitude above the EU AI Act’s threshold of 10^25 FLOPs. To date, few companies have publicly disclosed models that meet this threshold, but more are expected to cross it in the coming year.
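To give a rough sense of scale, the sketch below applies the widely cited "6 × parameters × training tokens" heuristic for approximating training compute. The heuristic, the hypothetical model size, and the token count are illustrative assumptions only; the statute counts actual compute used in training and subsequent modifications, not estimates of this kind.

```python
# A minimal, hypothetical sketch: estimating training compute with the
# common 6 * N * D heuristic (roughly 6 FLOPs per parameter per token).
# TFAIA's own accounting differs: it counts all computing power used in
# training plus subsequent fine-tuning or modifications.

TFAIA_THRESHOLD = 1e26      # SB-53 frontier-model threshold (FLOPs)
EU_AI_ACT_THRESHOLD = 1e25  # EU AI Act systemic-risk threshold (FLOPs)

def estimate_training_flops(num_params: float, num_tokens: float) -> float:
    """Rough training-compute estimate via the 6 * N * D heuristic."""
    return 6 * num_params * num_tokens

# Hypothetical model: 1 trillion parameters, 15 trillion training tokens.
flops = estimate_training_flops(num_params=1e12, num_tokens=15e12)
print(f"Estimated training compute: {flops:.1e} FLOPs")               # 9.0e+25
print(f"Exceeds EU AI Act threshold: {flops > EU_AI_ACT_THRESHOLD}")  # True
print(f"Exceeds TFAIA threshold: {flops > TFAIA_THRESHOLD}")          # False
```

Under these illustrative assumptions, a one-trillion-parameter model trained on 15 trillion tokens would cross the EU AI Act's threshold but fall just short of California's, underscoring how much higher TFAIA sets the bar.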

The law imposes additional transparency requirements on large frontier developers whose annual revenue exceeds $500,000,000.

Key Requirements

Developers must publish an accessible general safety framework that demonstrates how they incorporate national and international standards, assess catastrophic risks, and implement mitigation strategies. This framework must also include cybersecurity practices to secure unreleased model weights from unauthorized modifications. Developers are required to review their frameworks annually and publish any material modifications within 30 days.

When releasing a new or substantially modified frontier model, developers must publish a transparency report detailing the model’s release date, intended uses, and any restrictions on deployment. They must also summarize their catastrophic-risk assessments and disclose the role of third-party evaluators.

Reporting Critical Safety Incidents

TFAIA mandates that frontier developers notify the California Governor’s Office of Emergency Services (OES) of any critical safety incident, defined as model behavior that risks death, serious injury, or loss of control, within 15 days of discovering it. If the incident poses an imminent risk, disclosure must occur within 24 hours. OES will establish a reporting portal for both public and confidential submissions of such incidents and will publish anonymized annual summaries starting in 2027.

Whistleblower Protections

The law establishes strong whistleblower protections for employees of frontier developers, prohibiting retaliation and requiring anonymous reporting channels. The California Attorney General will publish anonymized annual reports on whistleblower activities beginning in 2027.

Formation of the “CalCompute” Consortium

TFAIA directs the establishment of a consortium to create a state-backed public cloud compute cluster, CalCompute, providing advanced computing capabilities for researchers and universities. By January 1, 2027, the consortium must report to the California Legislature with details on its proposed design and governance.

Ongoing Updates to the Law

The law recognizes that AI technology is constantly evolving and directs the California Department of Technology to review the definitions of “frontier model” and “large frontier developer” annually, keeping California’s definitions aligned with federal and international standards. The law also acknowledges that foundation models from smaller companies may pose significant catastrophic risks, suggesting the need for future legislation.

Several controversial features of SB 1047, the AI safety bill Governor Newsom vetoed in 2024, were omitted from TFAIA, including mandatory third-party audits and pre-launch testing requirements. Instead, TFAIA emphasizes transparency and accountability over pre-approval and direct control.

Future Outlook

Other AI-specific statutes are set to take effect in California in 2025 and 2026, including transparency mandates requiring developers to disclose training data and embed invisible watermarks in AI-generated content. Additionally, Congress is debating a federal “moratorium” that could preempt state AI legislation. The pace of state AI lawmaking is expected to accelerate, with more than 100 AI bills enacted across the country in recent legislative sessions.
