California’s Landmark AI Law Demands Transparency From Leading AI Developers
On September 29, 2025, California Governor Gavin Newsom signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA). The legislation is California’s most significant regulation of AI developers to date.
California is the first state in the country to require large AI developers to publicly disclose a safety framework that incorporates widely accepted safety standards and explains a model’s capacity to pose “catastrophic risks” and how those risks are mitigated. The law also requires model developers, for the first time, to release transparency reports on a model’s intended uses and restrictions, and it mandates that large developers summarize their assessments of a model’s catastrophic risks. In addition, the TFAIA breaks new ground by requiring developers to report “critical safety incidents” to the government; by providing whistleblower protections for model developers’ employees; and by establishing a consortium to create “CalCompute,” a public cloud computing cluster.
Sacramento’s move to impose state-level oversight of AI model developers is at odds with recent federal actions. That the state with the nation’s largest economy and the most AI companies is moving in one direction on AI regulation while the federal government moves in another may complicate the industry’s compliance efforts.
TFAIA’s Narrow But Expanding Applicability
The TFAIA imposes its requirements on a small but growing number of companies. The law applies to “frontier developer[s]”—entities that have trained or are training a “frontier model”—and it imposes additional requirements on “large frontier developer[s],” frontier developers whose annual gross revenue exceeded $500 million in the preceding calendar year. The law defines a “frontier model” by the amount of computing power used to train, fine-tune, or modify it, namely “a quantity of computing power greater than 10^26 integer or floating-point operations (FLOP).”
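For illustration, the definitional thresholds reduce to a simple classification rule. The sketch below is a hypothetical Python example: the function name, inputs, and sample values are illustrative assumptions, not terms drawn from the statute.

```python
# Illustrative sketch (hypothetical helper, not part of the statute):
# a "frontier model" is one trained with more than 10^26 integer or
# floating-point operations, and a "large frontier developer" is a
# frontier developer whose annual gross revenue exceeded $500 million
# in the preceding calendar year.

FRONTIER_MODEL_FLOP_THRESHOLD = 1e26             # compute used to train, fine-tune, or modify
LARGE_DEVELOPER_REVENUE_THRESHOLD = 500_000_000  # USD, preceding calendar year

def classify_developer(training_flop: float, prior_year_revenue_usd: float) -> str:
    """Return a rough TFAIA classification for the developer of a single model."""
    if training_flop <= FRONTIER_MODEL_FLOP_THRESHOLD:
        return "not a frontier developer (model below compute threshold)"
    if prior_year_revenue_usd > LARGE_DEVELOPER_REVENUE_THRESHOLD:
        return "large frontier developer (additional TFAIA obligations apply)"
    return "frontier developer"

# Example: a model trained with 3 x 10^26 FLOP by a developer with $2B in revenue
print(classify_developer(3e26, 2_000_000_000))
```

Because the law directs an annual review of these definitions (discussed below), any such thresholds are better treated as configuration subject to change than as fixed constants.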
Critically, few models today meet the high technical threshold to be a “frontier model,” but the trendlines suggest many more will soon. According to one analysis, if trends hold, there will be “around 30 such models [that use over 10^26 FLOP] by the start of 2027, and over 200 models by the start of 2030.”
The law directs the California Department of Technology to review the definitions of “frontier model,” “frontier developer,” and “large frontier developer” annually and to submit recommendations to the legislature for updates.
The TFAIA requires regulated entities to act against “catastrophic risks,” which the law defines as the risk of an incident, involving a frontier model, that causes death or serious injury to more than 50 people or more than $1 billion in damages, where the model does any of the following: provides expert-level assistance in the creation or release of weapons of mass destruction; engages in cyberattacks, murder, or similar crimes; or evades the control of the developer.
Nothing in the law explicitly limits its applicability to frontier developers based in California, and California can be expected to seek to enforce the law against companies that sell their AI products in the state regardless of where they are based, much as it applies its other laws to businesses with sufficient contacts with the state.
Required Disclosures of Frontier Developers
The TFAIA places meaningful regulatory burdens on frontier developers and large frontier developers by mandating transparency in several ways, including obligations to publicly disclose safety frameworks and risk assessments.
Whistleblower Protections
The TFAIA establishes whistleblower protections for those working for frontier developers, aiming to encourage the reporting of safety concerns without fear of retaliation.
Public Cloud Computing Cluster
In addition to regulating the development of frontier models, the law promotes the creation of public infrastructure to support AI research.
California May Serve as a Model for Other States and Jurisdictions
California is the first, but perhaps not the last, state to regulate AI developers specifically. In New York, a bill awaiting Governor Kathy Hochul’s signature, the Responsible AI Safety and Education (RAISE) Act, would require developers of frontier AI models to create and maintain safety and security protocols; report significant safety incidents to the state; evaluate their models and withhold any model that poses an “unreasonable risk of critical harm,” such as mass casualties or significant economic damage; and enlist a third party to perform a yearly, independent audit of the developer’s compliance with the law.
Other states have enacted AI laws that regulate AI when it is used in particular ways. For example, the Colorado Artificial Intelligence Act, which will go into effect on June 30, 2026, imposes various obligations related to documentation, disclosures, and governance of “high-risk” AI systems—systems that make “consequential decisions” relating to education, employment, health care, and similar areas.
Several additional California laws will also go into effect on January 1, 2026, including the AI Training Data Transparency Act (AB 2013), which requires AI developers, as well as those who “substantially modify” or tune AI models, to publicly post details about the data on which their models were trained.
California joins jurisdictions like the European Union, Japan, and South Korea in advancing elements of a risk-based regulatory framework for high-impact AI systems, bringing special regulatory focus to the prospect of catastrophic risk in the U.S. context. The passage of the TFAIA also comes on the heels of the United Nations’ launch of the Global Dialogue on Artificial Intelligence Governance, announced in late September, which emphasizes alignment and cooperation on policy, science, and capacity building on AI within UN circles.
Next Steps
Frontier developers and large frontier developers should review the TFAIA and consult with counsel to assess the law’s impact on their operations. These entities may seek to assess their current transparency practices; draft frontier AI frameworks and transparency reports, where appropriate; formalize processes to report critical safety incidents; assess catastrophic risks and enhance internal documentation; update whistleblower policies and notify workers of whistleblower protections; track California’s designation of compliant federal incident-reporting policies, if any; and monitor developments related to CalCompute.