California’s Blueprint for Regulating Foundation AI Models

California Frontier AI Working Group Issues Report on Foundation Model Regulation

On March 18, 2025, the Joint California Policy Working Group on AI Frontier Models (the “Working Group”) released its draft report on the regulation of foundation models. The report aims to provide an “evidence-based foundation for AI policy decisions” in California that “ensures these powerful technologies benefit society globally while reasonably managing emerging risks.” Governor Gavin Newsom established the Working Group in September 2024 following his veto of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047).

Background and Context

The Working Group builds on California’s partnership with Stanford University and the University of California, Berkeley, as established by Governor Newsom’s 2023 Executive Order on generative AI. The report underscores that foundation model capabilities have rapidly improved since the veto of SB 1047 and emphasizes the unique opportunity California has to shape AI governance, which “may not remain open indefinitely.”

Key Components for Foundation Model Regulation

The report identifies several critical components for effective regulation of foundation models:

Transparency Requirements

The report finds that foundation model transparency requirements are a “necessary foundation” for AI regulation. It recommends that policymakers prioritize public-facing transparency to advance accountability. The report specifically suggests transparency requirements focusing on five categories of information, illustrated in the sketch following the list:

  1. Training data acquisition
  2. Developer safety practices
  3. Developer security practices
  4. Pre-deployment testing by developers and third parties
  5. Downstream impacts, including disclosures from entities that host foundation models for download or use
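
To make these categories concrete, the following minimal sketch shows one way a developer disclosure record covering the five categories might be organized. The field names and structure are illustrative assumptions only; the report does not prescribe any particular format.

    # Purely illustrative sketch: field names and structure are assumptions,
    # not requirements drawn from the Working Group report.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TransparencyDisclosure:
        """One hypothetical disclosure record covering the report's five categories."""
        model_name: str
        training_data_sources: List[str] = field(default_factory=list)    # 1. training data acquisition
        safety_practices: List[str] = field(default_factory=list)         # 2. developer safety practices
        security_practices: List[str] = field(default_factory=list)       # 3. developer security practices
        pre_deployment_tests: List[str] = field(default_factory=list)     # 4. testing by developers and third parties
        downstream_impact_notes: List[str] = field(default_factory=list)  # 5. downstream impacts, incl. host disclosures

    disclosure = TransparencyDisclosure(
        model_name="example-foundation-model",
        training_data_sources=["licensed corpus", "public web crawl"],
        pre_deployment_tests=["internal red-teaming", "third-party evaluation"],
    )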

Third-Party Risk Assessments

Recognizing that transparency alone may be insufficient, the report emphasizes the need for third-party risk assessments. These assessments are deemed essential to create incentives for developers to enhance model safety. The report advocates for establishing safe harbors that indemnify public interest safety research and recommends routing mechanisms for swiftly communicating identified vulnerabilities to developers and affected parties.

Whistleblower Protections

The report also highlights the need for whistleblower protections for employees and contractors of foundation model developers. It advises policymakers to consider protections that cover a broader range of AI developer activities, such as failures to adhere to a company’s own AI safety policy, even if the reported conduct does not violate existing laws.

Adverse Event Reporting Requirements

The report identifies adverse event reporting as a “critical first step” in assessing the costs and benefits of AI regulation. It recommends that foundation model reporting systems (illustrated in the sketch following the list):

  1. Provide reports to relevant agencies with the authority to address identified harms, with discretion to share anonymized findings with industry stakeholders.
  2. Use initially narrow reporting criteria focused on a tightly defined set of harms, with the criteria revised over time.
  3. Adopt a hybrid approach combining mandatory reporting requirements for critical “parts of the AI stack” with voluntary reporting from downstream users.
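
As a purely hypothetical illustration of the hybrid approach described above, the sketch below models a single adverse event report with fields for the reporting channel (mandatory versus voluntary), the affected harm category, and the receiving agency. The field names, enumerated values, and summary logic are assumptions for illustration, not drawn from the report.

    # Hypothetical sketch of an adverse event report record; all names and
    # values are illustrative assumptions, not terms from the report.
    from dataclasses import dataclass
    from enum import Enum

    class ReportingChannel(Enum):
        MANDATORY = "mandatory"   # critical "parts of the AI stack"
        VOLUNTARY = "voluntary"   # downstream users

    @dataclass
    class AdverseEventReport:
        model_name: str
        harm_category: str         # drawn from an initially narrow, tightly defined set
        description: str
        channel: ReportingChannel
        receiving_agency: str      # agency with authority to address the harm

        def anonymized_summary(self) -> dict:
            """Findings that could be shared with industry stakeholders, at the agency's discretion."""
            return {"harm_category": self.harm_category, "channel": self.channel.value}

    report = AdverseEventReport(
        model_name="example-foundation-model",
        harm_category="critical-infrastructure-incident",
        description="Hypothetical incident description.",
        channel=ReportingChannel.MANDATORY,
        receiving_agency="relevant state agency",
    )
    print(report.anonymized_summary())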

Foundation Model Regulation Thresholds

The report assesses various options for defining the thresholds that would trigger foundation model requirements, including:

  • Developer-level thresholds (e.g., a developer’s employee headcount)
  • Cost-level thresholds (e.g., compute-related costs of model training)
  • Model-level thresholds based on performance on key benchmarks
  • Impact-level thresholds based on the number of commercial users of the model

The report finds that “compute thresholds,” such as the EU AI Act’s threshold of 10²⁵ floating-point operations (FLOPs) used in model training, are currently the most attractive cost-level option, though they should be used in combination with other metrics. It cautions against customary developer-level metrics, such as employee headcount, which do not reflect the specifics of the AI industry and its technology.
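
For a sense of scale, the sketch below estimates a model’s training compute using the commonly cited approximation of roughly six floating-point operations per parameter per training token and compares the result to the 10²⁵ FLOPs figure. The heuristic and the example model size and token count are illustrative assumptions, not figures from the report or the EU AI Act.

    # Rough illustration only: uses the common ~6 * parameters * tokens heuristic
    # for training compute; the example model size and token count are made up.
    EU_AI_ACT_THRESHOLD_FLOPS = 1e25  # cumulative training-compute threshold (FLOPs)

    def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
        """Approximate total training FLOPs using the ~6 * N * D heuristic."""
        return 6 * num_parameters * num_training_tokens

    # Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
    flops = estimated_training_flops(70e9, 15e12)
    print(f"Estimated training compute: {flops:.2e} FLOPs")
    print("Exceeds EU AI Act threshold:", flops > EU_AI_ACT_THRESHOLD_FLOPS)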

Legislative Implications

The ongoing public comment process and the report will inform lawmakers as they consider AI legislation during the 2025 legislative session. This includes SB 53, a foundation model whistleblower bill introduced by Senator Wiener. Other states, including Colorado, Illinois, Massachusetts, New York, Rhode Island, and Vermont, are also considering foundation model legislation. For example, the New York Responsible AI Safety & Education (RAISE) Act would impose transparency, disclosure, documentation, and third-party audit requirements on certain developers of AI models meeting its compute and cost thresholds.

The Working Group is actively seeking public input on the report, with responses due by April 8, 2025. The final version of the report is expected to be released by June 2025, ahead of the California legislature’s adjournment in September.
