xAI Challenges California’s Training Data Transparency Act

On December 29, 2025, xAI, the developer of the artificial intelligence (AI) chatbot Grok, filed a lawsuit seeking to invalidate California’s Generative Artificial Intelligence: Training Data Transparency Act (TDTA). The TDTA, which took effect on January 1, 2026, mandates that developers of generative AI systems or services publicly disclose certain information about the datasets used to train their models.

The law requires AI developers to post on their websites high-level summaries of the datasets used to develop any generative AI system or service made available on or after January 1, 2022. Each summary must address 12 enumerated categories of information.

Legal Arguments by xAI

In its complaint, xAI seeks a declaration that the TDTA violates the U.S. Constitution and a permanent injunction preventing the California attorney general from enforcing the law. Central to xAI’s argument is its claim that the law requires public disclosure of its trade secrets.

The complaint alleges that compelling such disclosure amounts to an uncompensated taking of xAI’s trade secrets, in violation of the Fifth Amendment’s Takings Clause, which prohibits the government from taking private property without just compensation.

xAI contends that the quality and uniqueness of training data are crucial to an AI model’s performance and competitive advantage. Consequently, AI developers invest heavily in identifying high-quality data sources that competitors are not using while maintaining the secrecy of such datasets.

According to the complaint, by compelling xAI to disclose how its datasets further the intended purpose of its models, the number of data points (including the number of tokens), and the types of data xAI has selected for developing its AI models, the TDTA effects an unconstitutional taking by “eviscerating xAI’s ability to exclude others from accessing that information,” thereby nullifying the value of its trade secrets.

Ambiguity and Vague Terms

xAI further argues that, to the extent that the law requires revealing the sources of its datasets “beyond the Internet writ large,” such a disclosure would also appropriate xAI’s trade secrets. These claims depend on whether the TDTA indeed requires disclosure of information that constitutes trade secrets.

xAI’s complaint notes that the law does not specify how much information a “high-level” summary must disclose, and that no guidance has been issued on the level of detail required for compliance. This ambiguity, together with other unclear terms in the statute, underpins xAI’s further argument that the law is unconstitutionally vague in violation of the Fourteenth Amendment’s Due Process Clause.

First Amendment Concerns

Separately, xAI also claims that the law violates the First Amendment by compelling speech through the required dissemination of specific information.

Implications for AI Developers

As states continue to enact laws demanding transparency into how AI systems are developed and trained, the absence of federal legislation leaves AI developers exposed to the TDTA and similar emerging laws. That exposure is likely to generate comparable trade secret claims and constitutional challenges.

Courts will be tasked with navigating the tension between meaningful transparency to protect consumers and the preservation of trade secrets and other competitively sensitive information to promote innovation in the AI sector.
