AI Liability: Understanding the Risks and Responsibilities

Artificial intelligence (AI) is increasingly pervasive in business and social life. From generative tools such as ChatGPT to customer-service chatbots, we are growing accustomed to its applications simplifying work processes, handling mundane tasks and, increasingly, making decisions.

While AI offers tremendous potential for both businesses and individuals, its growing use also brings significant risks. Algorithmic bias, discrimination, deepfakes, privacy concerns, and lack of transparency can erode trust in AI and the organisations that utilise it.

Bodies like the European Union, through initiatives such as the EU AI Act, are working to encourage the adoption of human-centric and trustworthy AI. Their goal is to ensure robust protection for health, safety, fundamental rights, democracy, and the rule of law against the potential harms of AI systems, while also fostering innovation and supporting the functioning of the internal market.

Emerging Legal Challenges

While the push to make AI systems safe, transparent, traceable, non-discriminatory, and environmentally friendly is highly commendable, it appears inevitable that AI-related disputes will rise globally in the coming years. Courts will face the challenge of applying traditional legal concepts to these emerging technologies.

Regulatory Shifts in the EU

AI is an extremely complex area, and AI liability even more so; there are no easy fixes. According to legal experts, the unique characteristics of AI systems raise novel liability questions, and it is unclear whether current regimes will be fit for purpose in compensating damage suffered when AI systems fail.

The EU’s revised Product Liability Directive (PLD) seeks to address some of these issues by bringing AI systems into the strict product liability regime. The new legislation expands the scope of claims to cover AI systems and standalone software. However, we are at a very early stage: no legal precedents have been set, and it remains to be seen how the courts will interpret the new PLD and apply existing doctrines such as negligence to questions of liability.

The new legislation will make it easier for consumers to bring claims over defective AI systems placed on the EU market, and its presumptions of defect and causation significantly increase the liability risk for AI developers and deployers. At the same time, the opaque nature of the technology means that liability routes do not fit neatly within the existing rules, making exposure difficult to assess.

The ‘Black Box’ Paradigm

One of the biggest challenges with AI, from a liability perspective, is the ‘black box’ nature of these systems. Opacity raises significant evidential issues when seeking to determine the cause of a malfunction or which party is responsible for damage caused.

The inability to see how an AI system reached its decision, how it has continued to learn, or how it was trained, and whether a fault can be traced back to the manufacturer or developer, complicates accountability. The new PLD seeks to address this by ensuring that a consumer’s inability to read what is inside the black box is not a bar to a claim.

The presumptions of causation are designed to mitigate the black box problem, making it easier for consumers to bring claims where the technical or scientific evidence is excessively difficult to produce or the AI system itself is too complex. If a claimant can demonstrate that the product contributed to the damage and is likely to have been defective, courts will apply a rebuttable presumption of defect and causation, which the defendant must then disprove.
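Much of the practical difficulty here is evidential, so deployers can reduce their own exposure by making systems less opaque in operation, for example by maintaining a decision audit trail. The following is a minimal sketch of such a trail in Python; every name in it (AuditRecord, log_decision, the field layout) is an illustrative assumption, not a schema prescribed by the PLD or the AI Act.

```python
import datetime
import hashlib
import json
from dataclasses import asdict, dataclass


@dataclass
class AuditRecord:
    """One traceable entry per automated decision (illustrative schema)."""
    timestamp: str      # when the decision was made (UTC, ISO 8601)
    model_version: str  # which model or build produced it
    input_digest: str   # hash of the input, so it can be matched later
    output: str         # the decision or prediction returned
    explanation: str    # whatever rationale the system can surface


def log_decision(model_version: str, inputs: dict, output: str,
                 explanation: str = "n/a") -> AuditRecord:
    """Build an audit record; printing stands in for durable storage."""
    record = AuditRecord(
        timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
        model_version=model_version,
        input_digest=hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        output=output,
        explanation=explanation,
    )
    print(json.dumps(asdict(record)))
    return record


# Example: record a single, hypothetical credit-scoring decision.
log_decision("scoring-model-2.3.1",
             {"applicant_id": 123, "income": 54000},
             output="declined",
             explanation="debt-to-income ratio above threshold")
```

In practice such records would go to append-only storage and be retained for the limitation period, so that either side can produce them if a claim is brought.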

Strict Liability and Regulatory Frameworks

From a legislative standpoint, there have been significant developments in recent years. In the EU, the AI Act and the new PLD are often described as two sides of the same coin. The regulatory and liability frameworks are closely connected, and any non-compliance with mandatory requirements under the AI Act will likely lead to increased strict liability risks under the new PLD.

Although the liability impact of the EU AI Act is less comprehensive without the shelved AI Liability Directive, general tort law will continue to apply. In most EU member states, this means that anyone who causes damage by violating a legal obligation must compensate for the damage suffered.

The EU policy framework essentially obliges operators to reduce the potential for harm from AI. When an issue does arise, operators face consequences, whether through regulatory enforcement and fines or through civil liability claims brought by the parties harmed.

Approaches to Mitigating AI Risks

As the AI landscape rapidly advances, companies must prepare for potential risks associated with AI failures. To manage and mitigate liability, they can take proactive steps to address pertinent issues.

Companies can manage the liability risks associated with AI failures in three key ways:

  1. Contractual Protections: Companies can negotiate contractual promises regarding an AI system’s functionality and seek damages if these promises are broken.
  2. Managing Liability to Customers: In business-to-business contexts, risk allocation is flexible, but whether the risk of AI failure can be shifted onto customers depends on the specific context.
  3. Implementing Internal Systems: Companies should reduce AI failure risks and quickly identify issues through internal risk management systems.

Thorough risk assessments covering data privacy, cybersecurity protections and vulnerabilities, algorithmic bias, and regulatory compliance are crucial. Identifying high-risk systems under the AI Act and formulating an AI compliance plan are essential to ensuring that regulatory requirements are met.
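As a rough illustration of how such an assessment might be organised internally, the sketch below encodes the checklist above as data. The structure and names (AISystemAssessment, open_issues, the CHECKS tuple) are assumptions made for illustration and do not reflect the AI Act’s actual classification criteria.

```python
from dataclasses import dataclass, field

# Assessment areas drawn from the checklist above.
CHECKS = ("data_privacy", "cybersecurity",
          "algorithmic_bias", "regulatory_compliance")


@dataclass
class AISystemAssessment:
    name: str
    purpose: str
    findings: dict = field(default_factory=dict)  # check -> result

    def record(self, check: str, passed: bool, note: str = "") -> None:
        """Store the outcome of one assessment area."""
        if check not in CHECKS:
            raise ValueError(f"unknown check: {check}")
        self.findings[check] = {"passed": passed, "note": note}

    def open_issues(self) -> list[str]:
        """Checks that failed or were never carried out."""
        return [c for c in CHECKS
                if not self.findings.get(c, {}).get("passed", False)]


# Example: assess a hypothetical CV-screening tool.
assessment = AISystemAssessment("cv-screener", "shortlisting job applicants")
assessment.record("data_privacy", passed=True, note="DPIA completed")
assessment.record("algorithmic_bias", passed=False,
                  note="disparate outcomes across age groups")
print(assessment.open_issues())
# -> ['cybersecurity', 'algorithmic_bias', 'regulatory_compliance']
```

Keeping open items explicit in this way makes it easier to show later that issues were identified and tracked rather than ignored.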

Anticipating Future Legal Frameworks

As regulators strive to keep up with the evolving AI landscape, companies must take proactive measures to protect themselves. Navigating AI liability will remain challenging, especially as policymakers continue to update product liability laws. The deployment of such technologies may ultimately require imposing a duty of care on those who deploy AI tools, so that harm does not go uncompensated simply because responsibility is opaque.

Ultimately, developers, manufacturers, and users will need to collaborate to mitigate liability risks and ensure the safe integration of AI systems.
