AI Trends for 2026 – Return of the Brussels Effect: AI Transparency Requirements Come to California
Recently enacted AI transparency regulations took effect in California on January 1, 2026. For enterprises with a global footprint, these California regulations bear a striking resemblance to transparency obligations under the EU AI Act. Although the California laws are not identical to their EU counterparts, examining them together can help enterprises implement globally applicable AI compliance programs.
Key California AI Transparency Obligations
Notable California AI transparency obligations include:
- SB 243: Companion Chatbots
This legislation requires that deployers of companion chatbots provide a clear and conspicuous notification that the companion chatbot is not human if a reasonable person would be misled. SB 243’s obligations resemble Article 50 of the EU AI Act, which mandates that providers inform individuals that they are interacting with an AI system.
- AB 2013: Generative Artificial Intelligence Training Data Transparency
This law requires developers of generative AI systems to publish documentation describing their training data, including the sources of datasets, the types of data points, and any use of synthetic data in development. Article 53 of the EU AI Act imposes a similar requirement, obligating general-purpose AI model providers to publish a detailed summary of the content used to train the model.
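Neither AB 2013 nor the EU AI Act prescribes a schema or file format for this documentation. Purely as a sketch, published training-data documentation covering the statutory elements (dataset sources, data point types, synthetic data use) might look like the following; every field name and value here is illustrative, not drawn from the statute:

```python
import json

# Hypothetical training-data documentation of the kind AB 2013 contemplates.
# All names and values are illustrative; the statute does not mandate a schema.
training_data_summary = {
    "model": "example-gen-model-v1",  # placeholder model identifier
    "dataset_sources": [
        {"name": "Public web crawl", "license": "mixed/public"},
        {"name": "Licensed news archive", "license": "commercial"},
    ],
    "data_point_types": ["text", "images"],
    "synthetic_data_used": True,  # AB 2013 asks about synthetic data explicitly
    "synthetic_data_purpose": "augmentation of low-resource domains",
}

print(json.dumps(training_data_summary, indent=2))
```

A structured, machine-readable format like this could also serve double duty toward the Article 53 training-content summary, though the EU template imposes its own requirements.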
- SB 942: California AI Transparency Act
This act requires covered providers to offer an AI detection tool that enables users to assess whether content was created by the provider’s generative AI system. Covered providers must include a latent disclosure in AI-generated content and give users the option of including a manifest disclosure in their content. Article 50 of the EU AI Act similarly requires that providers of AI systems generating synthetic content ensure that outputs are marked as artificially generated. The California AI Transparency Act was originally scheduled to take effect on January 1, 2026, but AB 853 delayed its effective date until August 2, 2026, aligning it with the corresponding requirements under the EU AI Act. However, the EU has more recently proposed delaying those requirements until 2027; it remains to be seen whether California will follow suit.
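A latent disclosure is, in effect, machine-readable provenance metadata embedded in or attached to AI-generated content so that the provider’s detection tool can recognize it. The sketch below assumes a simple key–value record with a content hash; real implementations would likely use an interoperable standard such as C2PA content credentials, and every name here is illustrative rather than anything SB 942 prescribes:

```python
import hashlib
import json

def make_latent_disclosure(content: bytes, provider: str, system: str) -> dict:
    """Build an illustrative machine-readable provenance record.

    Hypothetical sketch only: SB 942 does not prescribe this structure.
    A content hash lets a detection tool verify that the record actually
    corresponds to the content it accompanies.
    """
    return {
        "provider": provider,
        "generative_system": system,
        "ai_generated": True,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

record = make_latent_disclosure(
    b"example AI-generated output", "ExampleCo", "example-model-v1"
)
print(json.dumps(record, indent=2))
```

The manifest disclosure, by contrast, is the human-visible counterpart (for example, an on-screen label), which under SB 942 users must be given the option to include.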
- SB 53: Transparency in Frontier Artificial Intelligence Act
This legislation requires frontier developers, regardless of size, to publish transparency reports when deploying a new or modified frontier model. Large frontier developers must also publish a “frontier AI framework” describing how they incorporate industry-consensus best practices and mitigate potential catastrophic risks. These requirements align with the risk-based obligations and the consideration of industry standards set forth in EU AI Act Articles 6, 56, 67, and 95.
Implications for AI Developers and Deployers
While these California disclosure requirements impose obligations on enterprises, they also enable AI developers and deployers to benchmark against others in the industry. AI developers already exchange safety-related best practices through forums like the Frontier Model Forum. For AI deployers, these transparency requirements may provide new insights into the tools and safety practices employed by developers and other deployers. As these compliance practices become publicly available, AI developers and deployers should monitor evolving trends in their industries.