AI Liability: Understanding the Risks and Responsibilities

Artificial intelligence (AI) is increasingly pervasive in business and social life. From generative tools such as ChatGPT to customer-service chatbots, we have grown accustomed to its applications simplifying work processes, handling mundane tasks and, increasingly, making decisions.

While AI offers tremendous potential for both businesses and individuals, its growing use also brings significant risks. Algorithmic bias, discrimination, deepfakes, privacy concerns, and lack of transparency can erode trust in AI and the organisations that utilise it.

Bodies like the European Union, through initiatives such as the EU AI Act, are working to encourage the adoption of human-centric and trustworthy AI. Their goal is to ensure robust protection for health, safety, fundamental rights, democracy, and the rule of law against the potential harms of AI systems, while also fostering innovation and supporting the functioning of the internal market.

Emerging Legal Challenges

While the push to make AI systems safe, transparent, traceable, non-discriminatory, and environmentally friendly is highly commendable, it appears inevitable that AI-related disputes will rise globally in the coming years. Courts will face the challenge of applying traditional legal concepts to these emerging technologies.

Regulatory Shifts in the EU

AI is an extremely complex field, and AI liability even more so; there are currently no easy fixes. According to legal experts, the unique characteristics of AI systems pose novel liability questions, and it is unclear whether current regimes will be fit for purpose in compensating for damage suffered when AI systems fail.

The revised EU Product Liability Directive (PLD) seeks to address some of these issues by bringing AI systems into the strict product liability regime, expanding the scope of claims to cover AI systems and standalone software. However, we are at a very early stage in considering AI and liability: no legal precedents have been set, and it remains to be seen how the courts will interpret the new PLD and apply existing doctrines such as negligence to questions of liability.

This new legislation will make it easier for consumers to bring claims over failing AI systems placed on the market in the EU, and the new presumptions of defect and causation significantly increase liability risk for AI developers and deployers. The opaque nature of the technology means that claims do not map easily onto the existing rules, making liability difficult to assess.

The ‘Black Box’ Problem

One of the biggest challenges with AI, from a liability perspective, is the ‘black box’ nature of these systems. Opacity raises significant evidential issues when seeking to determine the cause of a malfunction or which party is responsible for damage caused.

When it is impossible to see how an AI system reached its decision, how it has continuously learned, or how it was trained, and whether any of this can be traced back to the manufacturer or developer, accountability becomes difficult to establish. The new PLD seeks to address this, ensuring that a consumer's inability to see inside the black box is not a bar to a claim.

The presumptions of causation are designed to ease the black box problem, making it easier for consumers to bring claims where the technical or scientific evidence is excessively difficult to obtain or the AI system itself is too complex to analyse. If a claimant can demonstrate that the product contributed to the damage and that it is likely the product was defective, courts will apply a rebuttable presumption that the defendant must then disprove.

Strict Liability and Regulatory Frameworks

From a legislative standpoint, there have been significant developments in recent years. In the EU, the AI Act and the new PLD are often described as two sides of the same coin. The regulatory and liability frameworks are closely connected, and any non-compliance with mandatory requirements under the AI Act will likely lead to increased strict liability risks under the new PLD.

Although the liability impact of the EU AI Act is less comprehensive without the shelved AI Liability Directive, general tort laws will continue to apply. In EU member states, this generally means that anyone who causes damage by violating a legal obligation is obliged to compensate for the damage suffered.

The EU policy framework essentially obliges operators to reduce the potential for risk from AI. When an issue does arise, there are consequences for operators, whether through regulatory enforcement and fines or through civil liability at the suit of the parties harmed.

Approaches to Mitigating AI Risks

As the AI landscape rapidly advances, companies must prepare for potential risks associated with AI failures. To manage and mitigate liability, they can take proactive steps to address pertinent issues.

Three key ways companies can manage liability risks associated with AI failures include:

  1. Contractual Protections: Companies can negotiate contractual promises regarding an AI system’s functionality and seek damages if those promises are broken.
  2. Managing Liability to Customers: In business-to-business contexts, risk allocation is flexible, but whether the risk of AI failure can be passed on to customers depends on the specific context.
  3. Implementing Internal Systems: Companies should reduce the risk of AI failures and identify issues quickly through internal risk management systems (a minimal sketch of one such component follows this list).
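
To make the third item concrete, here is a purely illustrative Python sketch of one building block of an internal risk management system: an append-only audit log that records every AI decision together with the model version and a redacted input, so that failures can be traced after the fact. All names and fields are hypothetical assumptions for illustration, not taken from the article, the AI Act, or any real product.

```python
# Hypothetical sketch: an append-only audit trail for AI decisions.
# Class, field, and file names are illustrative, not a real API or standard.
import hashlib
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    model_name: str       # which system produced the output
    model_version: str    # exact version, so outputs trace back to a build
    input_digest: str     # hash of the input; avoids storing raw personal data
    output: str           # the decision or prediction that was acted upon
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AuditLog:
    """Append-only JSON Lines log, so the chain of decisions can be
    reconstructed and tied to a model version if a dispute arises."""

    def __init__(self, path: str) -> None:
        self.path = path

    def record(self, rec: DecisionRecord) -> None:
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(rec)) + "\n")


def digest(raw_input: str) -> str:
    """Store a hash of the input rather than the input itself."""
    return hashlib.sha256(raw_input.encode("utf-8")).hexdigest()


# Usage: log at the moment an AI output is acted upon.
log = AuditLog("decisions.jsonl")
log.record(DecisionRecord(
    model_name="loan-screening",       # hypothetical deployment
    model_version="2.3.1",
    input_digest=digest("applicant record ..."),
    output="application referred for human review",
))
```

Whether a log like this would satisfy any particular documentation obligation is a question for counsel; the engineering point is simply that traceability must be designed in before a dispute arises, not reconstructed afterwards.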

Thorough risk assessments covering data privacy concerns, cybersecurity protections and vulnerabilities, algorithmic bias, and regulatory compliance are crucial. Identifying high-risk systems under the AI Act and formulating an AI compliance plan are essential to ensuring that regulatory requirements are met.

Anticipating Future Legal Frameworks

As regulators strive to keep pace with the evolving AI landscape, companies must take proactive measures to protect themselves. Navigating AI liability will remain challenging, especially as policymakers update product liability laws. Deploying such technologies may ultimately require imposing a duty of care on the deployers of AI tools, so that harms do not go uncompensated simply because responsibility is opaque.

Ultimately, developers, manufacturers, and users will need to collaborate to mitigate liability risks and ensure the safe integration of AI systems.
