California’s AI Transparency Act Set for Major Changes

California’s AI Transparency Act (CAITA) May Be Amended to Regulate Social Media Platforms

Last year, the California Legislature passed the California AI Transparency Act (CAITA), which Governor Gavin Newsom signed into law on September 19, 2024. The law is scheduled to take effect on January 1, 2026, but recent developments point to amendments that could significantly reshape its scope.

Current Provisions of CAITA

As enacted, CAITA applies only to certain providers of publicly accessible generative artificial intelligence (GenAI) systems. It requires covered businesses to include latent disclosures in the GenAI outputs they produce. A latent disclosure is one that is “present but not manifest,” meaning it is embedded in the content but not readily apparent; covered businesses must also give users the option of “manifest disclosures,” which are easily perceived and understood.

CAITA currently applies to any entity that creates or produces a GenAI system with more than 1,000,000 monthly visitors in California. Its key requirements, illustrated in the sketch after the list below, include:

  • Inclusion of latent disclosures in GenAI outputs
  • Provision of manifest disclosure options to users
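
To make the latent/manifest distinction concrete, the following is a minimal, illustrative sketch rather than a compliance recipe: it embeds a hypothetical disclosure record in a PNG text chunk using the Pillow library, one possible way a disclosure can travel with generated content without being visually apparent. CAITA does not prescribe an embedding format; the field names, the provider name “ExampleGenAI,” and the “example-disclosure/0.1” label are invented for illustration, and real providers might instead rely on provenance standards such as C2PA Content Credentials.

    # Illustrative sketch only: CAITA does not prescribe an embedding format.
    # Stores a hypothetical machine-readable disclosure in a PNG text chunk
    # so it travels with the image without being visually apparent (latent).
    import json

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def build_latent_disclosure(provider_name: str) -> PngInfo:
        """Build PNG metadata carrying a hypothetical AI-disclosure record."""
        disclosure = {
            "generated_by_ai": True,           # the core fact being disclosed
            "provider": provider_name,         # invented field names, not statutory terms
            "spec": "example-disclosure/0.1",  # placeholder label, not a real standard
        }
        meta = PngInfo()
        meta.add_text("ai_disclosure", json.dumps(disclosure))
        return meta

    if __name__ == "__main__":
        img = Image.new("RGB", (64, 64), color="white")    # stand-in for a GenAI output
        meta = build_latent_disclosure("ExampleGenAI")
        img.save("output.png", pnginfo=meta)               # the disclosure rides along invisibly
        # A manifest disclosure, by contrast, would be something the user can
        # readily perceive, such as a visible watermark or on-screen label.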

Notably, the amendments proposed in AB 853 would not alter these provisions, but they would push back their effective date (to August 2, 2026, per the timeline below).

Proposed Amendments Under AB 853

AB 853 aims to broaden the reach of CAITA by mandating compliance from large online platforms and manufacturers of “capture devices” (devices that record photographs, audio, or video). Large online platforms are defined as those with at least two million monthly users.

The proposed amendment would require these platforms to do the following (a platform-side sketch appears after this list):

  • Detect and maintain provenance data, i.e., metadata that records a piece of content’s origin and history
  • Provide users with an interface to inspect this provenance data
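
As a rough illustration of the detection-and-preservation duty, the sketch below shows a hypothetical platform-side ingest step that reads a disclosure record (such as the one written by the earlier sketch) out of an uploaded PNG’s text metadata and retains it alongside the content rather than losing it on re-encode. The function and field names are assumptions; a real implementation would also need durable storage, coverage of formats beyond PNG, and the user-facing inspection interface the bill contemplates.

    # Illustrative sketch only: a hypothetical platform-side check that detects
    # a latent disclosure in an uploaded PNG's text metadata and preserves it.
    import json
    from typing import Optional

    from PIL import Image

    def extract_provenance(path: str) -> Optional[dict]:
        """Return the embedded disclosure record from an uploaded PNG, if any."""
        img = Image.open(path)
        text_chunks = getattr(img, "text", {})   # PNG text chunks; other formats lack this
        raw = text_chunks.get("ai_disclosure")
        return json.loads(raw) if raw else None

    def ingest_upload(path: str, provenance_store: dict) -> None:
        """Record provenance data alongside the stored content instead of dropping it."""
        record = extract_provenance(path)
        if record is not None:
            # Keep only what the disclosure itself carries; no additional
            # personal information is collected at this step.
            provenance_store[path] = record

    if __name__ == "__main__":
        store: dict = {}
        ingest_upload("output.png", store)   # file produced by the earlier sketch
        print(store)                         # roughly what a user-facing inspection view might show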

In addition, capture device manufacturers would be required to embed latent disclosure information in the content their devices capture, to the extent technically feasible. This is a more detailed, and potentially more burdensome, obligation than anything in the existing law.

Potential Impacts of Provenance Data Preservation

The amendment would impose significant operational costs on large online platforms for storing, managing, and securing provenance data. These platforms would also have to preserve latent disclosures in a form that AI-detection tools can identify, which could add further costs.

CAITA also restricts the collection of certain personal information and limits the retention of submitted content, so platforms would need to maintain only the provenance data necessary for compliance, consistent with California’s stringent data privacy regime.

Consequences of Non-Compliance

Regardless of whether AB 853 is signed into law, CAITA imposes significant penalties on entities that remove or fail to maintain provenance data. Violations carry liability of $5,000 per instance, plus attorneys’ fees for prevailing plaintiffs, and because liability attaches per instance, exposure scales quickly: stripping provenance data from 10,000 items, for example, could translate into $50 million in potential penalties. These penalties underscore the law’s potential to combat disinformation, while also raising concerns about misuse by bad actors.

Implementation Timeline

Under AB 853, the provisions of CAITA would take effect as follows:

  • General provisions: August 2, 2026
  • Large online platform provisions: January 1, 2027
  • Capture device manufacturer provisions: January 1, 2028

In summary, the California AI Transparency Act represents a significant step toward greater accountability and transparency in AI, particularly as its reach extends to social media platforms and capture devices. The proposed amendments could enhance the law’s effectiveness, but they would also introduce new compliance challenges.
