Litigation Risks in the Age of Artificial Intelligence

Artificial Intelligence: Growing Litigation Risk

Businesses are increasingly integrating AI into their working practices, with recent reports indicating that 65% of respondents in a “State of AI” survey are now regularly using generative AI in their organizations. This marks a significant increase from previous years, reflecting the rapid pace at which AI technology is evolving. As a result, legislators around the globe are striving to keep up with these advancements.

The EU AI Act became law in August 2024, while the UK is anticipating the introduction of an Artificial Intelligence Bill. Furthermore, the EU has adopted the Revised Product Liability Directive, which aims to simplify the process for consumers bringing claims against companies when harmed by AI products. Notably, the directive reverses the burden of proof in specific situations, shifting the responsibility to the defendant to demonstrate that the product was not defective.

Exponential Growth in AI Litigation Very Possible

Despite these advancements, many still perceive AI as a “black box.” Questions remain about the capability and behaviour of AI models: Do we truly understand what they can do? When might they hallucinate or exhibit bias? As AI usage proliferates, its increasing complexity, together with the accompanying legislation, paves the way for a potential surge in litigation over both the manufacture and the use of AI technologies.

Litigation related to AI has primarily focused on the development of the relevant technologies, with several claims brought against manufacturers for alleged breaches of intellectual property rights. However, as businesses adopt AI into their operations, claims concerning its usage are emerging under both contract and tort law. For instance, in Leeway Services Ltd v Amazon, Leeway alleged that Amazon’s AI systems led to its wrongful suspension from trading on the online marketplace. Similarly, in Tyndaris SAM v MMWWVWM Limited, VWM contended that Tyndaris had misrepresented the capabilities of an AI-powered trading system. Although neither case has reached trial, a recent Canadian ruling in Moffatt v Air Canada found that Air Canada had failed to ensure the accuracy of responses provided by its chatbot.

The Regulators Are Taking Notice

Regulators are becoming increasingly vigilant regarding AI. In the UK, the Information Commissioner’s Office has published a strategic approach to AI, while the Financial Conduct Authority is working to better understand the risks and opportunities that AI presents within the financial services sector. There is a growing likelihood of heightened regulatory scrutiny concerning whether companies have made false or misleading public statements about their AI usage. In March 2024, the US Securities and Exchange Commission reached settlements with two investment advisers over “AI washing” practices.

As regulatory focus intensifies, the potential for private claims to arise in response to adverse regulatory findings increases significantly.

Mass Claims a Risk

Group litigation poses a significant risk for both AI manufacturers and businesses that use AI. The nature of AI allows errors to propagate rapidly, potentially affecting large groups of people before the errors are detected. While each individual loss may be minimal, the cumulative harm could be substantial.

Despite the UK Supreme Court’s 2021 ruling in Lloyd v Google, which suggested that pursuing group claims in England is challenging, various structures exist for such claims. The Supreme Court indicated that issues common to the claimants could be addressed at a preliminary hearing, allowing individual claimants to pursue their losses on the basis of that representative decision. Recent cases have explored this question, including Commission Recovery Limited v Marks and Clerk LLP and Prismall v Google UK Ltd and DeepMind Technologies Ltd.

Additionally, claimants can opt for a group litigation order (GLO) for joint management of claims involving related issues. Alternatively, numerous claimants can independently pursue their claims in a consolidated multiparty proceeding, as seen in the ongoing Município de Mariana v BHP Group actions.

Collective proceedings in the UK’s Competition Appeal Tribunal (CAT) are also on the rise. These claims must be grounded in competition law, yet parties are increasingly framing consumer protection actions as claims of anti-competitive conduct in order to use this framework. Major technology companies, including Microsoft, Meta, Alphabet/Google, and Apple, are frequent targets of such actions.

Regardless of the approach taken, it seems inevitable that a group claim relating to the development or use of AI will come before the English courts in the near future.
