Litigation Risks in the Age of Artificial Intelligence

Artificial Intelligence: Growing Litigation Risk

Businesses are increasingly integrating AI into their working practices, with a recent “State of AI” survey reporting that 65% of respondents’ organizations now regularly use generative AI. This marks a significant increase from previous years, reflecting the rapid pace at which AI technology is evolving. As a result, legislators around the globe are striving to keep up with these advancements.

The EU AI Act entered into force in August 2024, while the UK is anticipating the introduction of an Artificial Intelligence Bill. The EU has also adopted a Revised Product Liability Directive, which aims to make it easier for consumers to bring claims against companies when they are harmed by AI products. Notably, the directive reverses the burden of proof in certain circumstances, requiring the defendant to demonstrate that the product was not defective.

Exponential Growth in AI Litigation Very Possible

Despite the advancements, many still perceive AI as a “black box.” Questions arise regarding the capability and knowledge of AI models: Do we truly understand their power? Are they capable of hallucinating or exhibiting bias? As AI usage proliferates, its increasing complexity and the accompanying legislation pave the way for a potential surge in litigation associated with both the manufacture and use of AI technologies.

Litigation related to AI has primarily focused on the development of the relevant technologies, with several claims brought against manufacturers for alleged breaches of intellectual property rights. However, as businesses adopt AI into their operations, claims concerning its use are emerging under both contract and tort law. For instance, in Leeway Services Ltd v Amazon, Leeway alleged that Amazon’s AI systems led to its wrongful suspension from trading on the online marketplace. Similarly, in Tyndaris SAM v MMWWVWM Limited, VWM contended that Tyndaris misrepresented the capabilities of an AI-powered system. Although neither case has reached trial, a recent Canadian decision in Moffatt v Air Canada held Air Canada responsible for inaccurate information provided to a customer by its chatbot.

The Regulators Are Taking Notice

Regulators are becoming increasingly vigilant regarding AI. In the UK, the Information Commissioner’s Office has published a strategic approach to AI, while the Financial Conduct Authority is working to better understand the risks and opportunities that AI presents within the financial services sector. There is a growing likelihood of heightened regulatory scrutiny concerning whether companies have made false or misleading public statements about their AI usage. In March 2024, the US Securities and Exchange Commission reached settlements with two investment advisers over “AI washing” practices.

As regulatory focus intensifies, the potential for private claims to arise in response to adverse regulatory findings increases significantly.

Mass Claims a Risk

Group litigation poses a significant risk for both AI manufacturers and businesses that utilize AI. The nature of AI allows for the rapid dissemination of errors, potentially affecting large groups before they are detected. While the individual loss may be minimal, the cumulative harm could be substantial.

Although the UK Supreme Court’s 2021 ruling in Lloyd v Google confirmed the limits of representative actions in England, various structures exist for bringing group claims. The Supreme Court indicated that issues common to a class of claimants could be resolved in a representative claim, with individual claimants then pursuing their own losses in reliance on that decision. Recent cases have explored this route, including Commission Recovery Limited v Marks and Clerk LLP and Prismall v Google UK Ltd and DeepMind Technologies Ltd.

Additionally, claimants can opt for a group litigation order (GLO) for joint management of claims involving related issues. Alternatively, numerous claimants can independently pursue their claims in a consolidated multiparty proceeding, as seen in the ongoing Município de Mariana v BHP Group actions.

Collective proceedings before the UK’s Competition Appeal Tribunal (CAT) are also on the rise. These claims must be grounded in competition law, yet parties are increasingly framing consumer protection actions as anti-competitive conduct claims to take advantage of this framework. Tech giants, including Microsoft, Meta, Alphabet/Google, and Apple, are increasingly targeted in such actions.

Whichever route is taken, it appears inevitable that a group claim relating to the development or use of AI will come before the English courts in the near future.
