Artificial Intelligence: Growing Litigation Risk
Businesses are increasingly integrating AI into their working practices: a recent “State of AI” survey reports that 65% of respondents now regularly use generative AI in their organizations. This marks a significant increase from previous years, reflecting the rapid pace at which AI technology is evolving. As a result, legislators around the globe are striving to keep up with these advancements.
The EU AI Act became law in August 2024, while the UK is anticipating the introduction of an Artificial Intelligence Bill. Furthermore, the EU has adopted the Revised Product Liability Directive, which aims to simplify the process for consumers seeking redress from companies when harmed by AI products. Notably, the directive reverses the burden of proof in specific situations, shifting the onus to the defendant to demonstrate that the product was not defective.
Exponential Growth in AI Litigation Very Possible
Despite these advancements, many still perceive AI as a “black box.” Questions persist about the capability and reliability of AI models: Do we truly understand how they reach their outputs? Can they hallucinate or exhibit bias? As AI usage proliferates, its increasing complexity and the accompanying legislation pave the way for a potential surge in litigation associated with both the manufacture and use of AI technologies.
Litigation related to AI has primarily focused on the development of relevant technologies, with several claims initiated against manufacturers for alleged breaches of intellectual property rights. However, as businesses adopt AI into their operations, claims concerning its usage are emerging under both contract and tort law. For instance, in Leeway Services Ltd v Amazon, Leeway alleged that Amazon’s AI systems led to its wrongful suspension from trading on the online marketplace. Similarly, in Tyndaris SAM v MMWWVWM Limited, VWM contended that Tyndaris misrepresented the capabilities of an AI-powered system. Although neither case has reached trial, a recent Canadian tribunal ruling in Moffatt v Air Canada held the airline responsible for inaccurate responses provided by its customer-service chatbot.
The Regulators Are Taking Notice
Regulators are becoming increasingly vigilant regarding AI. In the UK, the Information Commissioner’s Office has published a strategic approach to AI, while the Financial Conduct Authority is working to better understand the risks and opportunities that AI presents within the financial services sector. There is a growing likelihood of heightened regulatory scrutiny concerning whether companies have made false or misleading public statements about their AI usage. In March 2024, the US Securities and Exchange Commission reached settlements with two investment advisers over “AI washing” practices.
As regulatory focus intensifies, the potential for private claims to arise in response to adverse regulatory findings increases significantly.
Mass Claims a Risk
Group litigation poses a significant risk for both AI manufacturers and businesses that utilize AI. The nature of AI allows an error to propagate rapidly, potentially affecting large groups of people before it is detected. While each individual loss may be minimal, the cumulative harm could be substantial.
Despite the UK Supreme Court’s 2021 ruling in Lloyd v Google, which highlighted the difficulties of pursuing representative group claims in England, various structures exist for such claims. The Supreme Court indicated that issues common to a class could be determined at a preliminary stage in a representative action, leaving individual claimants to prove their own losses on the back of that decision. Recent cases have explored this question, including Commission Recovery Limited v Marks and Clerk LLP and Prismall v Google UK Ltd and DeepMind Technologies Ltd.
Additionally, claimants can opt for a group litigation order (GLO) for joint management of claims involving related issues. Alternatively, numerous claimants can independently pursue their claims in a consolidated multiparty proceeding, as seen in the ongoing Município de Mariana v BHP Group actions.
Collective proceedings before the UK’s Competition Appeal Tribunal (CAT) are also on the rise. These claims must be grounded in competition law, yet parties are increasingly framing consumer protection actions as anti-competitive conduct claims to leverage this framework. Tech giants, including Microsoft, Meta, Alphabet/Google, and Apple, are increasingly targeted in such actions.
Whichever route is taken, it appears inevitable that a group claim concerning the development or use of AI will come before the English courts in the near future.