AI Liability: Understanding the Risks and Responsibilities

Artificial intelligence (AI) is increasingly pervasive in business and social life. From generative assistants such as ChatGPT to customer-service chatbots, we have become accustomed to its applications simplifying work processes, handling mundane tasks, and, increasingly, making decisions.

While AI offers tremendous potential for both businesses and individuals, its growing use also brings significant risks. Algorithmic bias, discrimination, deepfakes, privacy concerns, and lack of transparency can erode trust in AI and the organisations that utilise it.

Legislators such as the European Union, through initiatives like the EU AI Act, are working to encourage the adoption of human-centric and trustworthy AI. Their goal is to ensure robust protection for health, safety, fundamental rights, democracy, and the rule of law against the potential harms of AI systems, while also fostering innovation and supporting the functioning of the internal market.

Emerging Legal Challenges

While the push to make AI systems safe, transparent, traceable, non-discriminatory, and environmentally friendly is highly commendable, it appears inevitable that AI-related disputes will rise globally in the coming years. Courts will face the challenge of applying traditional legal concepts to these emerging technologies.

Regulatory Shifts in the EU

AI is an extremely complex area, and AI liability even more so; there are no easy fixes. According to legal experts, the unique characteristics of AI systems raise novel liability questions, and it is unclear whether current regimes will be fit for purpose in compensating damage suffered when AI systems fail.

The revised EU Product Liability Directive (PLD) seeks to address some of these issues and brings AI systems into the strict product liability regime. The new legislation expands the scope of claims to cover AI systems and standalone software. However, we are at a very early stage in considering AI and liability. No legal precedents have been set, and it remains to be seen how the courts will interpret the new PLD and apply existing laws, such as negligence, to questions of liability.

The new regime will make it easier for consumers to bring claims over defective AI systems placed on the market in the EU, and the new presumptions of defect and causation significantly increase liability risk for AI developers and deployers. The opaque nature of this technology means that claims do not map easily onto the existing rules, making liability difficult to assess.

The ‘Black Box’ Paradigm

One of the biggest challenges with AI, from a liability perspective, is the ‘black box’ nature of these systems. Opacity raises significant evidential issues when seeking to determine the cause of a malfunction or which party is responsible for damage caused.

The inability to see how an AI system reached its decision, how it has continued to learn, or how it was trained, and whether a fault can be traced back to the manufacturer or developer, complicates accountability. The new PLD seeks to address this, ensuring that a consumer's inability to see inside the black box is not a bar to a claim.

The presumptions of causation are designed to mitigate the black box problem, making it easier for consumers to bring claims where the technical or scientific evidence is excessively difficult to obtain or the AI system itself is too complex. If a claimant can demonstrate that the product likely contributed to the damage and is likely defective, courts will apply a rebuttable presumption that the defendant must disprove.
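
To make the burden-shifting concrete, the rule can be sketched as a toy decision procedure. This is purely illustrative: the predicate names are simplified assumptions, not the directive's statutory tests, and in practice each is a fact-intensive question for the court rather than a boolean.

    # Toy sketch of the rebuttable-presumption logic described above.
    # Predicate names are illustrative simplifications, not statutory language.

    def presumption_applies(proof_excessively_difficult: bool,
                            likely_defective: bool,
                            likely_contributed_to_damage: bool) -> bool:
        """A court may presume defect and causation when proof is excessively
        difficult and the claimant shows both likelihoods."""
        return (proof_excessively_difficult
                and likely_defective
                and likely_contributed_to_damage)

    def liable_via_presumption(presumption: bool, defendant_rebutted: bool) -> bool:
        # The presumption is rebuttable: once it applies, the burden shifts,
        # and the defendant avoids liability on this route only by disproving
        # defect or causation.
        return presumption and not defendant_rebutted

What the sketch shows is where the evidential burden sits at each stage: the claimant establishes the preconditions, after which it falls to the defendant to rebut.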

Strict Liability and Regulatory Frameworks

From a legislative standpoint, there have been significant developments in recent years. In the EU, the AI Act and the new PLD are often described as two sides of the same coin. The regulatory and liability frameworks are closely connected, and any non-compliance with mandatory requirements under the AI Act will likely lead to increased strict liability risks under the new PLD.

Although the liability impact of the EU AI Act is not as comprehensive without the shelved AI Liability Directive, general tort laws will continue to apply. In EU member states, this generally means that anyone who causes damage by violating a legal obligation is required to compensate for the damage suffered.

The policy framework in the EU essentially obliges operators to reduce the potential for risk from AI. When an issue does arise, there are consequences for the operators, either through regulatory enforcement and fines or civil liability at the suit of the parties harmed.

Approaches to Mitigating AI Risks

As the AI landscape rapidly advances, companies must prepare for potential risks associated with AI failures. To manage and mitigate liability, they can take proactive steps to address pertinent issues.

Three key ways companies can manage liability risks associated with AI failures include:

  1. Contractual Protections: Companies can negotiate contractual promises regarding an AI system’s functionality and seek damages if these promises are broken.
  2. Managing Liability to Customers: In business-to-business settings, risk allocation is flexible, but whether the risk of AI failure can be pushed onto customers depends on the specific context.
  3. Implementing Internal Systems: Companies should reduce AI failure risks and quickly identify issues through internal risk management systems.

Thorough risk assessments covering data privacy concerns, cybersecurity protections and vulnerabilities, algorithmic bias, and regulatory compliance are crucial. Identifying high-risk systems under the AI Act and formulating an AI compliance plan are essential for ensuring that regulatory requirements are met.
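
As an illustration of how such an assessment might be operationalised, the sketch below triages an AI inventory for potentially high-risk systems. It is a minimal sketch in Python: the category names are simplified assumptions loosely modelled on the areas listed in Annex III of the AI Act, not the statutory list, and a real assessment would require legal review.

    from dataclasses import dataclass, field

    # Illustrative (not exhaustive) high-risk areas, loosely modelled on
    # Annex III of the EU AI Act; consult the legal text for the real list.
    HIGH_RISK_AREAS = {
        "biometric_identification",
        "critical_infrastructure",
        "education_and_vocational_training",
        "employment_and_worker_management",
        "essential_services",
        "law_enforcement",
        "migration_and_border_control",
        "administration_of_justice",
    }

    @dataclass
    class AISystemAssessment:
        """One AI system, the areas it is used in, and its open compliance actions."""
        name: str
        use_areas: set[str]
        open_actions: list[str] = field(default_factory=list)

        def is_potentially_high_risk(self) -> bool:
            # Flag the system if any of its use areas matches a listed
            # high-risk area; borderline cases still need legal review.
            return bool(self.use_areas & HIGH_RISK_AREAS)

    def triage(systems: list[AISystemAssessment]) -> list[AISystemAssessment]:
        """Return the systems that the compliance plan should prioritise."""
        return [s for s in systems if s.is_potentially_high_risk()]

    if __name__ == "__main__":
        inventory = [
            AISystemAssessment(
                name="cv-screening-model",
                use_areas={"employment_and_worker_management"},
                open_actions=["bias audit", "data protection impact assessment"],
            ),
            AISystemAssessment(name="internal-doc-search", use_areas={"knowledge_management"}),
        ]
        for system in triage(inventory):
            print(f"{system.name}: potentially high-risk; open actions: {system.open_actions}")

Keeping the inventory as structured data in this way makes it straightforward to track open actions per system and to re-run the triage as regulatory categories and guidance evolve.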

Anticipating Future Legal Frameworks

As regulators strive to keep up with the evolving AI landscape, companies must take proactive measures to protect themselves. Navigating AI liability will remain challenging, especially as policymakers update product liability laws. The deployment of these technologies may ultimately require imposing a duty of care on deployers of AI tools, so that harms do not go uncompensated simply because responsibility for them is opaque.

Ultimately, developers, manufacturers, and users will need to collaborate to mitigate liability risks and ensure the safe integration of AI systems.
