Litigation Risks in the Age of Artificial Intelligence

Artificial Intelligence: Growing Litigation Risk

Businesses are increasingly integrating AI into their working practices: in a recent “State of AI” survey, 65% of respondents reported that their organizations now regularly use generative AI. This marks a significant increase on previous years and reflects the rapid pace at which the technology is evolving. Legislators around the globe are striving to keep up.

The EU AI Act entered into force in August 2024, while the UK is expected to introduce an Artificial Intelligence Bill. The EU has also adopted the Revised Product Liability Directive, which aims to make it easier for consumers to bring claims against companies when they are harmed by AI products. Notably, the directive reverses the burden of proof in certain situations, requiring the defendant to demonstrate that the product was not defective.

Exponential Growth in AI Litigation Very Possible

Despite these advancements, many still perceive AI as a “black box.” Do we truly understand what these models can do? When might they hallucinate or exhibit bias? As AI usage proliferates, its increasing complexity and the accompanying legislation pave the way for a potential surge in litigation over both the manufacture and use of AI technologies.

Litigation related to AI has so far focused primarily on the development of the underlying technologies, with several claims brought against manufacturers for alleged breaches of intellectual property rights. However, as businesses adopt AI into their operations, claims concerning its use are emerging under both contract and tort law. In Leeway Services Ltd v Amazon, for instance, Leeway alleged that Amazon’s AI systems led to its wrongful suspension from trading on the online marketplace. Similarly, in Tyndaris SAM v MMWWVWM Limited, VWM contended that Tyndaris had misrepresented the capabilities of an AI-powered system. Although neither case reached trial, a recent Canadian ruling in Moffatt v Air Canada held the airline responsible for inaccurate information provided by its chatbot.

The Regulators Are Taking Notice

Regulators are becoming increasingly vigilant regarding AI. In the UK, the Information Commissioner’s Office has published a strategic approach to AI, while the Financial Conduct Authority is working to better understand the risks and opportunities that AI presents within the financial services sector. There is a growing likelihood of heightened regulatory scrutiny concerning whether companies have made false or misleading public statements about their AI usage. In March 2024, the US Securities and Exchange Commission reached settlements with two investment advisers over “AI washing” practices.

As regulatory focus intensifies, the potential for private claims to arise in response to adverse regulatory findings increases significantly.

Mass Claims a Risk

Group litigation poses a significant risk for both AI manufacturers and businesses that use AI. The nature of AI allows errors to propagate rapidly, potentially affecting large groups of people before those errors are detected. While each individual loss may be minimal, the cumulative harm could be substantial.

Despite the UK Supreme Court’s 2021 ruling in Lloyd v Google, which illustrated the challenges of pursuing opt-out group claims in England, various structures exist for such claims. The Supreme Court indicated that issues common to a class of claimants could be determined in a representative action, with individual claimants then pursuing their own losses on the back of that decision. Recent cases have explored this approach, including Commission Recovery Limited v Marks and Clerk LLP and Prismall v Google UK Ltd and DeepMind Technologies Ltd.

Additionally, claimants can opt for a group litigation order (GLO) for joint management of claims involving related issues. Alternatively, numerous claimants can independently pursue their claims in a consolidated multiparty proceeding, as seen in the ongoing Município de Mariana v BHP Group actions.

Collective proceedings in the UK’s Competition Appeal Tribunal (CAT) are also on the rise. These claims must be grounded in competition law, yet parties are increasingly framing what are in substance consumer protection actions as claims for anti-competitive conduct in order to use this framework. Tech giants, including Microsoft, Meta, Alphabet/Google, and Apple, are increasingly targeted in such actions.

Whichever route is taken, it seems inevitable that a group claim relating to the development or use of AI will come before the English courts in the near future.
