AI Liability: Understanding the Risks and Responsibilities

Artificial intelligence (AI) is increasingly pervasive in business and social life. From generative tools such as ChatGPT to everyday chatbots, we are becoming accustomed to its applications simplifying work processes, handling mundane tasks, and, increasingly, making decisions.

While AI offers tremendous potential for both businesses and individuals, its growing use also brings significant risks. Algorithmic bias, discrimination, deepfakes, privacy concerns, and lack of transparency can erode trust in AI and the organisations that utilise it.

Bodies like the European Union, through initiatives such as the EU AI Act, are working to encourage the adoption of human-centric and trustworthy AI. Their goal is to ensure robust protection for health, safety, fundamental rights, democracy, and the rule of law against the potential harms of AI systems while also fostering innovation and supporting the internal market’s functionality.

Emerging Legal Challenges

While the push to make AI systems safe, transparent, traceable, non-discriminatory, and environmentally friendly is highly commendable, it appears inevitable that AI-related disputes will rise globally in the coming years. Courts will face the challenge of applying traditional legal concepts to these emerging technologies.

Regulatory Shifts in the EU

AI is an extremely complex subject, and AI liability even more so; there are currently no easy fixes. According to legal experts, the unique characteristics of AI systems raise novel liability questions, and it is unclear whether current regimes will be fit for purpose in compensating for damage suffered when AI systems fail.

The revised EU Product Liability Directive (PLD) seeks to address some of these issues and bring AI systems into the strict product liability regime. The new legislation expands the scope of claims to cover AI systems and standalone software. However, we are at a very early stage in considering AI and liability: no legal precedents have been set, and it remains to be seen how the courts will interpret the new PLD and apply existing doctrines such as negligence to questions of liability.

This new legislation will make it easier for consumers to bring claims regarding failing AI systems placed on the market in the EU, and the new presumptions of defect and causation significantly increase liability risk for AI developers and deployers. The opaque nature of this technology means that liability routes do not easily fit within the existing rules, making liability difficult to assess.

The ‘Black Box’ Paradigm

One of the biggest challenges with AI, from a liability perspective, is the ‘black box’ nature of these systems. Opacity raises significant evidential issues when seeking to determine the cause of a malfunction or which party is responsible for damage caused.

When it is not possible to see how an AI system reached its decision, how it has continued to learn, or how it was trained, nor whether any of this can be traced back to the manufacturer or developer, accountability is hard to establish. The new PLD seeks to address this issue, ensuring that a consumer’s inability to read what is inside the black box is not a bar to a claim.

The presumptions of causation are designed to resolve the black box problem, making it easier for consumers to bring claims when producing technical or scientific evidence is excessively difficult or the AI system itself is too complex. If claimants can demonstrate that the product contributed to the damage and that it is likely defective, courts will apply a rebuttable presumption that the defendant must then disprove.
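
On the engineering side, one partial answer to the evidential problem is a tamper-evident audit trail of each decision. Below is a minimal sketch, assuming a scikit-learn-style model exposing a predict method; the log file name, function name, and record fields are illustrative assumptions, not anything prescribed by the PLD.

```python
import hashlib
import json
import time

AUDIT_LOG = "decision_audit.jsonl"  # hypothetical append-only log file

def audited_predict(model, model_version, features):
    """Run a prediction and keep an evidential record of how it was produced."""
    prediction = model.predict([features])[0]
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,  # which build of the model decided
        # Hash the input so the record is tamper-evident without storing raw data
        "input_sha256": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": str(prediction),
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return prediction
```

A log like this cannot open the black box, but it fixes what went in, what came out, and which model version was responsible, which is precisely the kind of traceability the new evidential presumptions are reacting to.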

Strict Liability and Regulatory Frameworks

From a legislative standpoint, there have been significant developments in recent years. In the EU, the AI Act and the new PLD are often described as two sides of the same coin. The regulatory and liability frameworks are closely connected, and any non-compliance with mandatory requirements under the AI Act will likely lead to increased strict liability risks under the new PLD.

Although the liability impact of the EU AI Act is less comprehensive without the shelved AI Liability Directive, general tort law will continue to apply. In EU member states, this generally means that anyone who causes damage by violating a legal obligation must compensate for the damage suffered.

The policy framework in the EU essentially obliges operators to reduce the potential for risk from AI. When an issue does arise, there are consequences for the operators, either through regulatory enforcement and fines or civil liability at the suit of the parties harmed.

Approaches to Mitigating AI Risks

As the AI landscape rapidly advances, companies must prepare for potential risks associated with AI failures. To manage and mitigate liability, they can take proactive steps to address pertinent issues.

Three key ways companies can manage liability risks associated with AI failures include:

  1. Contractual Protections: Companies can negotiate contractual promises regarding an AI system’s functionality and seek damages if these promises are broken.
  2. Managing Liability to Customers: In business-to-business contexts, risk allocation is relatively flexible, but whether the risk of AI failure can be pushed onto customers depends on the specific context.
  3. Implementing Internal Systems: Companies should reduce AI failure risks and quickly identify issues through internal risk management systems.

Thorough risk assessments covering data privacy, cybersecurity protections and vulnerabilities, algorithmic bias, and regulatory compliance are crucial. Identifying high-risk systems under the AI Act and formulating an AI compliance plan are essential to ensuring that regulatory requirements are met.
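
As a minimal sketch of what such an internal risk register might look like in code, the class below flags outstanding findings before a system is placed on the market; the field names and review criteria are illustrative assumptions, not requirements drawn from the AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemAssessment:
    """Illustrative entry in an internal AI risk register."""
    name: str
    processes_personal_data: bool
    high_risk_under_ai_act: bool  # e.g. falls within an Annex III use case
    bias_tested: bool
    security_reviewed: bool
    open_findings: list = field(default_factory=list)

    def review(self) -> list:
        """Flag gaps that should block deployment until resolved."""
        if self.processes_personal_data and not self.security_reviewed:
            self.open_findings.append("security review outstanding")
        if self.high_risk_under_ai_act and not self.bias_tested:
            self.open_findings.append("bias testing required for high-risk system")
        return self.open_findings

# Example: a hypothetical credit-scoring system with one unresolved finding
assessment = AISystemAssessment(
    name="credit-scoring-v2",
    processes_personal_data=True,
    high_risk_under_ai_act=True,
    bias_tested=False,
    security_reviewed=True,
)
print(assessment.review())  # ['bias testing required for high-risk system']
```

Even a simple register like this gives a company a documented, timestamped account of what it checked and when, which is often what regulators and courts look for first.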

Anticipating Future Legal Frameworks

As regulators strive to keep up with the evolving AI landscape, companies must take proactive measures to protect themselves. Navigating AI liability will remain challenging, especially as policymakers update product liability laws. Deploying such technologies may ultimately require imposing a duty of care on the deployers of AI tools, so that harms do not go uncompensated simply because responsibility is opaque.

Ultimately, developers, manufacturers, and users will need to collaborate to mitigate liability risks and ensure the safe integration of AI systems.
