Litigation Risks in the Age of Artificial Intelligence

Artificial Intelligence: Growing Litigation Risk

Businesses are increasingly integrating AI into their working practices, with recent reports indicating that 65% of respondents in a “State of AI” survey are now regularly using generative AI in their organizations. This marks a significant increase from previous years, reflecting the rapid pace at which AI technology is evolving. As a result, legislators around the globe are striving to keep up with these advancements.

The EU AI Act entered into force in August 2024, while the UK is anticipating the introduction of an Artificial Intelligence Bill. Furthermore, the EU has adopted the Revised Product Liability Directive, which aims to make it easier for consumers to bring claims against companies when harmed by AI products. Notably, the directive reverses the burden of proof in specific situations, shifting the responsibility to the defendant to demonstrate that the product was not defective.

Exponential Growth in AI Litigation Very Possible

Despite the advancements, many still perceive AI as a “black box.” Questions arise regarding the capability and knowledge of AI models: Do we truly understand their power? Are they capable of hallucinating or exhibiting bias? As AI usage proliferates, its increasing complexity and the accompanying legislation pave the way for a potential surge in litigation associated with both the manufacture and use of AI technologies.

Litigation related to AI has so far focused primarily on the development of the technology, with several claims brought against manufacturers for alleged breaches of intellectual property rights. However, as businesses adopt AI into their operations, claims concerning its use are emerging under both contract and tort law. For instance, in Leeway Services Ltd v Amazon, Leeway alleged that Amazon's AI systems led to its wrongful suspension from trading on the online marketplace. Similarly, in Tyndaris SAM v MMWWVWM Limited, VWM contended that Tyndaris misrepresented the capabilities of an AI-powered system. Although neither case has reached trial, a 2024 Canadian tribunal ruling in Moffatt v Air Canada found that Air Canada had failed to ensure the accuracy of responses provided by its chatbot.

The Regulators Are Taking Notice

Regulators are becoming increasingly vigilant regarding AI. In the UK, the Information Commissioner’s Office has published a strategic approach to AI, while the Financial Conduct Authority is working to better understand the risks and opportunities that AI presents within the financial services sector. There is a growing likelihood of heightened regulatory scrutiny concerning whether companies have made false or misleading public statements about their AI usage. In March 2024, the US Securities and Exchange Commission reached settlements with two investment advisers over “AI washing” practices.

As regulatory focus intensifies, the potential for private claims to arise in response to adverse regulatory findings increases significantly.

Mass Claims a Risk

Group litigation poses a significant risk for both AI manufacturers and businesses that utilize AI. The nature of AI allows for the rapid dissemination of errors, potentially affecting large groups before they are detected. While the individual loss may be minimal, the cumulative harm could be substantial.

Despite the UK Supreme Court's 2021 ruling in Lloyd v Google, which suggested that pursuing representative group claims in England is challenging, various structures exist for such claims. The Supreme Court indicated that issues common to the claimants could be resolved in a representative action, with individual claimants then pursuing their losses on the basis of that decision. Recent cases have explored this question, including Commission Recovery Limited v Marks and Clerk LLP and Prismall v Google UK Ltd and DeepMind Technologies Ltd.

Additionally, claimants can opt for a group litigation order (GLO) for joint management of claims involving related issues. Alternatively, numerous claimants can independently pursue their claims in a consolidated multiparty proceeding, as seen in the ongoing Município de Mariana v BHP Group actions.

Collective proceedings before the UK's Competition Appeal Tribunal (CAT) are also on the rise. These claims must be grounded in competition law, yet parties are increasingly framing consumer protection actions as anti-competitive conduct claims to leverage this framework. Tech giants, including Microsoft, Meta, Alphabet/Google, and Apple, are increasingly targeted in such actions.

Regardless of the structure adopted, it seems only a matter of time before a group claim relating to the development or use of AI comes before the English courts.
