AI Securities Litigation: Trends and Implications for 2026

2025’s Defining AI Securities Litigation

In 2025, securities litigation over artificial intelligence claims escalated significantly. What began as a trickle of exploratory cases prior to 2024 transformed into a sustained wave throughout 2024 and into 2025, as plaintiffs’ counsel increasingly scrutinized AI-related disclosures.

The statistics support this trend: AI-related securities filings more than doubled, from seven in 2023 to 15 in 2024, with an additional 14 cases filed in the first three quarters of 2025. This surge reflects a familiar cycle: markets reward AI innovation, prompting companies to highlight their AI capabilities, while plaintiffs’ counsel monitor for discrepancies between disclosures and actual performance, particularly when stock prices dip.

The Stakes Are Rising

The legal landscape is evolving as courts apply established securities law doctrines, including puffery, scienter, materiality, and the forward-looking statement safe harbor, to AI-related claims. Although the legal principles remain consistent, the technological context is new, and companies must pay close attention to these developments.

The Landscape: AI Securities Litigation Comes of Age

AI has permeated numerous sectors, including finance, healthcare, logistics, education, and retail. For public companies, an effective AI strategy is now a critical differentiator and often a driver of valuation. The current administration has reinforced the significance of business leadership in AI through a December 11, 2025, executive order outlining a national policy framework for AI.

This perceived value creates incentives for companies to prominently feature their AI capabilities, inevitably attracting scrutiny from plaintiffs’ counsel. Most AI-related securities cases fit into one of three categories:

  1. AI-washing: Allegations that companies exaggerated their AI capabilities.
  2. Capability-challenge cases: Claims that AI-enabled products failed to perform as marketed.
  3. Conventional fraud theories: Traditional securities claims adapted for the AI context.

Litigation is not the only risk; regulatory scrutiny is also intensifying. In May 2025, an enforcement attorney from the U.S. Securities and Exchange Commission (SEC) highlighted the agency’s priority of “rooting out the misuse” of AI through its newly established Cyber and Emerging Technologies Unit. The unit is tasked with investigating whether companies accurately describe their AI technology and communicate responsibly with investors.

2025’s Defining Cases: Emerging Judicial Frameworks

Three pivotal decisions from 2025—pertaining to General Motors Co./Cruise LLC, GitLab, and Tesla—offer insights into how courts will evaluate AI-related disclosures going forward.

The GM/Cruise Decision: When Technical Jargon Cuts Against You

On March 28, the U.S. District Court for the Eastern District of Michigan ruled in In re General Motors Co. Securities Litigation, a case centered on GM’s self-driving car unit, Cruise. The court drew a significant distinction between plain language and technical AI terminology.

Plaintiffs alleged that GM and Cruise exaggerated the readiness of their autonomous vehicles for a revenue-generating driverless taxi service, particularly challenging terms like “fully autonomous” and “Level 4 autonomy.”

The court treated these disclosures differently:

  • Plain English statements, such as “fully driverless,” were dismissed as nonactionable.
  • Technical statements concerning Level 4 autonomy were permitted to proceed on falsity grounds, as the court could not determine at the pleading stage whether the vehicles met the Society of Automotive Engineers’ criteria.

This distinction suggests that courts may be more willing to entertain claims involving specialized AI terminology, increasing litigation risk for companies that invoke it.

The GitLab Decision: The Power of “We Believe”

On August 14, in Dolly v. GitLab Inc., the U.S. District Court for the Northern District of California demonstrated how subjective qualifiers can reduce liability exposure for AI-related statements. GitLab faced claims that it overstated its AI platform’s capabilities, but the court dismissed the complaint, finding the challenged statements largely forward-looking or mere puffery.

The court emphasized GitLab’s use of opinion language such as “we believe,” which signaled corporate optimism rather than verifiable fact, allowing the court to sidestep a complex technological inquiry.

The Tesla Decision: AI Complexity as a Shield Against Scienter

On December 2, the U.S. Court of Appeals for the Ninth Circuit upheld the dismissal of claims alleging that Tesla and Elon Musk overstated the capabilities of Tesla’s autonomous driving technology. Plaintiffs pointed to allegedly misleading statements spanning several years, but the court noted two key factors:

  • Musk’s statements did not claim that autonomous driving was safer than human drivers, only that it could assist in safer driving.
  • The court recognized the complexity of AI technology, which helped undermine any inference of fraudulent intent regarding missed timelines.

This ruling suggests that courts may hesitate to infer scienter when companies convey the uncertainties and complexities inherent in AI development.

Looking Ahead: What To Watch in 2026

Several trends are emerging for the upcoming year:

  • Heightened Regulatory Scrutiny: The SEC’s Cyber and Emerging Technologies Unit is set to expand its expertise, leading to more investigations targeting AI-washing and capability exaggerations.
  • Continued Difficulty With Technical AI Terminology: As courts become more familiar with AI concepts, claims challenging technical AI statements will remain more likely to survive motions to dismiss.
  • Puffery Doctrine Will Be Tested: Plaintiffs may argue that subjective statements linked to specific metrics cross into factual territory.
  • Industry-Specific Frameworks May Emerge: Standards for autonomous vehicle cases may differ from those for software, fintech, or healthcare AI cases.

Navigating the Road Ahead

As AI continues to evolve, so too do the legal risks associated with it. Companies must strike a balance between effectively communicating their AI capabilities and managing the accompanying legal risks. The decisions rendered in 2025 outline the early contours of the law, and 2026 will further define them.

AI is not just reshaping industries; it is also transforming securities litigation. Companies that succeed will be those that communicate their AI capabilities with precision, provide appropriate context and qualifications, and understand that transparency is not only ethical but also a form of effective risk management.
