Year in Review: 2025 Artificial Intelligence-Privacy Litigation Trends

As AI systems became more deeply embedded in consumer-facing products and services throughout 2025, regulators and private plaintiffs continued to test how existing privacy and consumer-protection laws apply to the collection, use, and commercialization of personal data in AI development and deployment.

Over the past year, federal and state enforcement agencies intensified their scrutiny of AI-related practices, focusing in particular on unsubstantiated marketing claims, opaque or inaccurate data use disclosures, and risks to children. At the same time, private litigants advanced a wide range of novel theories—challenging everything from AI training practices to the use of automated chatbots under long-standing electronic-communications statutes. Courts responded with mixed results, offering early but important signals about disclosure obligations, consent, and the limits of applying legacy privacy laws to emerging technologies.

I. Consumer-Protection Actions

In 2025, government entities continued to scrutinize data-related practices of AI models and AI-enabled products and services. While many enforcement actions did not turn exclusively on privacy-related theories of liability, they nevertheless reflect growing interest in how companies collect, use, and describe data in connection with AI.

State Actions

State attorneys general (AGs) increased their focus on AI-related consumer-protection and privacy risks throughout 2025. A bipartisan coalition of state AGs issued a joint warning to leading AI developers, emphasizing that companies would be held accountable for harms stemming from AI systems' access to and use of consumer data, especially where those systems may affect children. The Texas AG announced an investigation into alleged representations that chatbots can serve therapeutic purposes. These ongoing efforts suggest that, even absent comprehensive federal AI legislation, state regulators are prepared to use existing consumer-protection tools to influence AI product design and data-governance practices.

Federal Actions

At the federal level, the Federal Trade Commission (FTC) continued to leverage its consumer-protection authority to scrutinize companies developing or deploying AI tools, with a focus on allegedly deceptive or unsubstantiated marketing claims. Late in the Biden administration, the FTC launched "Operation AI Comply," an enforcement initiative aimed at curbing false or misleading representations about AI capabilities and outcomes. The FTC brought several actions under Section 5's prohibition on unfair or deceptive conduct against companies accused of overstating the capabilities or benefits of their AI products, seeking injunctions, monetary relief, and, in at least one case, a permanent ban on offering AI-related services.

In parallel, the agency distributed more than $15 million in connection with allegations that a developer using AI tools stored, used, and sold consumer information without consumers' knowledge. This action underscored the connection between traditional privacy theories and consumer-protection enforcement against developers harnessing AI.

Private Actions

Private plaintiffs tested increasingly novel consumer-protection theories in cases challenging AI development and deployment. For instance, in one lawsuit, a plaintiff alleged that a company had unlawfully exploited the “cognitive labor” generated through user interactions with its AI system by capturing and using that data without compensation. Although the court ultimately dismissed the claims for failure to state a cognizable legal theory, the case illustrates the creative—and occasionally expansive—approaches plaintiffs have pursued in attempting to characterize AI data practices as unfair or deceptive.

II. Privacy Laws

A second—and increasingly consequential—strand of AI-privacy litigation in 2025 involved efforts to extend existing electronic-communications and privacy statutes to AI-enabled tools and data-collection practices. Courts were asked to determine whether long-standing prohibitions on unauthorized interception, disclosure, or misuse of personal information can accommodate technologies that replace or augment human interaction, collect data at scale, and repurpose that data for model development or improvement.

AI Chatbots and Electronic-Communications Statutes

Several cases tested whether AI chatbots deployed in customer-service or consumer-interaction settings constitute unlawful interception under state and federal electronic-communications laws. In Taylor v. ConverseNow Technologies, a federal court allowed a putative class-action claim under the California Invasion of Privacy Act (CIPA) to proceed against a SaaS company whose platform lets restaurants process customer phone calls using an AI assistant. The court focused on whether the chatbot provider could be treated as a "third party" interceptor, distinguishing between data used exclusively to benefit the consumer and data leveraged for the provider's own commercial purposes, including system improvement.

In contrast, other courts have been more skeptical of attempts to apply electronic-communications statutes to AI training practices. In Rodriguez v. ByteDance, the court dismissed claims brought under CIPA and the federal Electronic Communications Privacy Act, concluding that allegations that the technology company used personal data to train AI systems were overly speculative absent more concrete facts about interception or disclosure.

AI Training Data and Invasion-of-Privacy Claims

Some lawsuits also involved allegations that companies collected or repurposed consumer data without adequate disclosure or consent. In Riganian v. LiveRamp, a putative class of consumers survived an early motion to dismiss after alleging that a data broker used AI tools to collect, combine, and sell personal information drawn from both online and offline sources. The court concluded that plaintiffs had plausibly alleged invasive and nonconsensual data practices sufficient to support common-law privacy claims under California law, as well as claims under CIPA and the federal Wiretap Act.

III. Related Developments—State Legislative Action and the Courts

While privacy-related AI litigation continued to develop in the courts in 2025, state legislatures and court systems also took steps that may shape how such litigation unfolds in the future.

In 2025, state legislatures across the country focused on AI regulation, with California, Colorado, and Texas working to implement new laws expressly addressing AI systems. More than half of the states enacted laws aimed at privacy harms stemming from the creation and spread of "deepfakes," maliciously altered digital depictions of a person's likeness or voice. State legislators and AGs continued to broadly oppose federal preemption of state AI laws, preserving a role for states in AI governance.

Courts emerged as important institutional actors in AI governance. For example, the Arkansas Supreme Court adopted a rule requiring legal professionals to verify that AI tools used in connection with court work do not retain or reuse confidential data, warning that failure to do so could constitute professional misconduct. Other jurisdictions, including New York and Pennsylvania, issued similar guidance restricting the use of generative AI in ways that could compromise client confidentiality or judicial integrity.

Companies developing or deploying AI technologies should continue to monitor this rapidly evolving landscape as courts, regulators, and legislatures refine the contours of permissible data use.
