California’s Landmark Law on Frontier AI Regulation

California Transparency in Frontier AI Act Signed Into Law

Key Point: California enacts first-in-the-nation law focused on regulating frontier artificial intelligence models.

On September 29, 2025, California Governor Gavin Newsom signed SB 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), into law. The law requires large artificial intelligence (AI) developers to publish safety frameworks, issue specified transparency reports, and report critical safety incidents to the California Office of Emergency Services (OES). It also strengthens whistleblower protections for employees who report AI safety violations and establishes a consortium to design a framework for CalCompute, a public cloud platform intended to expand safe and equitable AI research.

Background

The TFAIA follows last year’s SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), which Newsom vetoed over concerns about its potential negative impact on California’s AI economy. The TFAIA passed both legislative chambers with substantial support and was quickly signed into law.

Scope of Law

The TFAIA applies to large frontier developers, defined as those that train, or initiate the training of, a foundation AI model using computing power greater than 10²⁶ operations and that have annual gross revenue exceeding $500 million. This definition likely limits the law’s reach to large technology companies, reflecting an effort to regulate catastrophic risk while preserving the benefits of AI development for Californians and the state’s economy. Catastrophic risk, as used in the law, refers to incidents that could cause significant loss of life or large-scale property damage.
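
For illustration only, the sketch below shows how the two scoping thresholds summarized above could be checked in code; the function name and inputs are hypothetical, and the statutory definitions, not this sketch, determine coverage.

```python
# Illustration only: a hypothetical check of the TFAIA's two scoping thresholds.
# Function name and inputs are invented for this example; the statute controls.

COMPUTE_THRESHOLD_OPS = 10**26        # training compute threshold (operations)
REVENUE_THRESHOLD_USD = 500_000_000   # annual gross revenue threshold (USD)

def is_large_frontier_developer(training_compute_ops: float,
                                annual_gross_revenue_usd: float) -> bool:
    """Return True if both thresholds summarized above appear to be exceeded."""
    trains_frontier_model = training_compute_ops > COMPUTE_THRESHOLD_OPS
    meets_revenue_test = annual_gross_revenue_usd > REVENUE_THRESHOLD_USD
    return trains_frontier_model and meets_revenue_test

# Example: a developer training at 3e26 operations with $2 billion in revenue
# would satisfy both thresholds.
print(is_large_frontier_developer(3e26, 2_000_000_000))  # True
```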

Transparency Obligations

AI Framework

The law establishes obligations for large frontier developers to adopt and disclose safety protocols to mitigate catastrophic risks. Developers must publish a detailed frontier AI framework on their website, including:

  • Risk assessments for catastrophic scenarios
  • Steps to mitigate those risks
  • Internal governance structures
  • Cybersecurity measures
  • Incident response plans

Frameworks must be updated annually and made public within 30 days of any changes. False or misleading claims regarding risks or adherence to the framework are prohibited.

Transparency Report

Developers must release a transparency report whenever a new or substantially altered frontier model is deployed, detailing the model’s uses, limitations, and catastrophic risk assessments.

Reporting Obligations

TFAIA requires large frontier developers to submit summaries of their catastrophic risk assessments to the OES every three months and mandates reporting of critical safety incidents within specified timeframes.

Critical Safety Reporting

The OES will establish a mechanism for reporting critical safety incidents. Developers must disclose an incident within 15 days of discovery, and within 24 hours if it poses an imminent risk of death or serious injury.
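
As a rough illustration of the two reporting windows described above, this sketch computes a hypothetical disclosure deadline from an incident’s discovery time; the function is invented for this example and is not drawn from the statute or any regulator tooling.

```python
# Illustration only: hypothetical calculation of a disclosure deadline under the
# two reporting windows summarized above (24 hours for imminent risk, else 15 days).
from datetime import datetime, timedelta

def disclosure_deadline(discovered_at: datetime, imminent_risk: bool) -> datetime:
    """Return the latest permissible disclosure time for a critical safety incident."""
    window = timedelta(hours=24) if imminent_risk else timedelta(days=15)
    return discovered_at + window

# Example: an incident discovered October 1 with no imminent risk would be
# reportable no later than October 16.
print(disclosure_deadline(datetime(2025, 10, 1), imminent_risk=False))
```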

Recommendations for Updates

Starting January 1, 2027, the California Department of Technology will annually assess technological developments and recommend updates to the definitions of frontier model and large frontier developer. The California Attorney General will also issue annual reports regarding whistleblower activity.

Enforcement

Violations of the TFAIA can result in civil penalties of up to $1 million per violation, including for failing to publish required documents or making false statements.

CalCompute

TFAIA establishes a consortium to create CalCompute, a public cloud computing cluster designed to foster safe and equitable AI development. A comprehensive report on this initiative is due by January 1, 2027.

Whistleblower Protections

The law prohibits retaliation against employees who report safety risks and requires developers to notify employees annually of their rights as whistleblowers. Developers must also establish internal processes through which employees can report such risks.
