California’s New AI Law Exposes the Real Compliance Problem
Frontier AI companies have spent the last decade operating at full speed with very few guardrails. Regulations have long been rumored but have been slow to arrive, creating an atmosphere in which compliance is a promise developers have never needed to keep.
However, a new California law signed by Governor Gavin Newsom on September 29, 2025, is changing that landscape. The Transparency in Frontier Artificial Intelligence Act (TFAIA) establishes a regulatory framework that aims to “build public trust while also continuing to spur innovation in these new technologies.” Even companies that do not train models themselves will feel the impact, as the vendors they rely on will now be subject to the same standards.
A Change in Course for Frontier Developers
Previous concerns regarding AI compliance were primarily focused on use cases. Lawmakers warned about how AI could be applied and where it had the potential to cause harm, such as inequity in loan decisions. TFAIA shifts this focus, aiming controls at capability rather than use case. The law’s chief concern is not where AI will be used but how much power it brings to an application.
Think of it as policing drivers based on how much horsepower their cars have rather than their behavior in traffic. Under this new regulatory landscape, frontier developers operating at high compute levels will face oversight before their models are deployed. Those who train models using more than 10²⁶ FLOPs of compute must implement frameworks designed to guard against systems that could cause catastrophic damage.
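For a rough sense of where that threshold sits, a widely used back-of-envelope heuristic estimates training compute as roughly 6 × parameters × training tokens. The sketch below applies that estimate to two hypothetical training runs; the parameter and token counts are illustrative assumptions, not disclosures from any real model.

```python
# Back-of-envelope check against TFAIA's 1e26 FLOP threshold.
# Uses the common ~6 * N * D estimate of training compute, where
# N is parameter count and D is training tokens. The runs below
# are hypothetical examples, not figures from actual models.

TFAIA_THRESHOLD_FLOPS = 1e26

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6 * N * D heuristic."""
    return 6 * n_params * n_tokens

hypothetical_runs = {
    "mid-size run (70B params, 15T tokens)": estimated_training_flops(70e9, 15e12),
    "frontier-scale run (1.8T params, 15T tokens)": estimated_training_flops(1.8e12, 15e12),
}

for name, flops in hypothetical_runs.items():
    status = "above" if flops > TFAIA_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the 1e26 threshold)")
```

Under these assumed figures, the mid-size run lands around 6 × 10²⁴ FLOPs, well under the threshold, while the frontier-scale run crosses it at roughly 1.6 × 10²⁶ FLOPs, which is why the law's obligations fall on only a handful of developers.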
A Call to Operationalize AI Ethics
In the past, the discussion around AI ethics was overshadowed by safety concerns. Safety is comparatively straightforward, akin to equipping cars with airbags, while ethics involves complex trade-offs and requires extensive collaboration. TFAIA signals a shift toward prioritizing ethics in AI development, demanding that developers operationalize it.
Crafting a Compliant-by-Design Culture
The expectations for frontier developers under TFAIA are high. Each developer must “write, implement, comply with, and clearly publish a frontier AI framework” that outlines how it assesses and mitigates catastrophic risks. Developers must also report critical safety incidents, such as unauthorized access to model weights or real-world harm stemming from a model’s catastrophic risks, to California’s Office of Emergency Services. Failure to report incurs civil penalties, and whistleblowers are protected under the law.
A compliant-by-design culture means treating TFAIA’s requirements as an operating manual for AI development. Companies that embed accountability into their processes will find compliance easier. They must also reject a reactive approach to safety, slowing their pace enough to test models thoroughly before deployment.
Transparency is crucial; intended use, known limitations, and assessed risks must not be hidden. Every employee should feel empowered to identify and report issues.
The TFAIA places AI ethics at the forefront of the marketplace, obligating frontier developers to take responsibility for the power of their technologies by fostering cultures that prioritize accountability alongside innovation.