EU Considers AI Liability Directive: What’s Next for Developers?

EU Legislation After the AI Act: Brussels Considers New Liability Rules

The EU has opened a consultation on a proposed AI Liability Directive (AILD), which would establish a civil liability regime for AI developers and users. The proposal follows the EU’s AI Act, which has now come into force, prompting lawmakers to shift their focus to the next set of rules governing artificial intelligence.

Key Takeaways

  • The AILD aims to create a unified civil liability framework across the EU for damages caused by AI systems.
  • The consultation will explore whether the directive is necessary and whether it should mandate AI liability insurance.

Does the EU Need More AI Rules?

Before the AI Act, AI was governed by a fragmented patchwork of EU and national laws, each often addressing a different technological context. The AI Act’s adoption has significantly harmonized this regulatory framework.

In 2024, the EU updated the Product Liability Directive (PLD) for the digital age, incorporating a section specifically covering “software and AI systems” and clarifying developers’ responsibilities towards users. Some stakeholders argue that the updated PLD already addresses all potential AI liability scenarios, rendering the AILD unnecessary.

Despite these claims, a recent impact assessment by the European Parliament highlights several issues the revised PLD leaves unresolved. Notably, the PLD currently establishes liability only for professional or corporate AI users, creating a regulatory gap where AI is used in a non-professional capacity. It also limits the range of damages eligible for compensation, leaving significant gaps in coverage.

Unaddressed Areas

The PLD does not cover cases in which AI produces discriminatory outcomes, breaches individuals’ rights to privacy and dignity, or causes harm related to the environmental and climate impact of AI systems.

Lawmakers Consider Mandating AI Liability Insurance

A central question in the AILD consultation is whether the directive should require operators of high-risk AI systems to hold liability insurance. The European Commission asserts that such insurance should be mandatory for all high-risk AI operators, which could impose considerable costs on companies developing technologies classified as high-risk under Article 6 of the AI Act.

For instance, developers of biometric surveillance tools may be required to insure against risks associated with false identification.

However, the consultation also asks whether insurers have enough data to underwrite AI risks effectively. A Deloitte report points to a global shortage of AI-related insurance products, attributing it to the lack of historical data on AI model performance and the rapid evolution of the technology.

Next Steps for the AI Liability Directive

Following the six-week consultation period, the European Parliament’s AILD rapporteur, Axel Voss, is set to report the findings in June. A second, twelve-week consultation will then run, leading to negotiations between the Parliament and the European Commission from September to December.

It is anticipated that the Committee on Legal Affairs will vote on the final document in January 2026, with a final plenary session scheduled for February.

This ongoing legislative process underscores the EU’s commitment to establishing a comprehensive framework for AI regulation, addressing the emerging challenges posed by rapidly advancing technologies.

More Insights

  • AI Regulations: Comparing the EU’s AI Act with Australia’s Approach
  • Quebec’s New AI Guidelines for Higher Education
  • AI Literacy: The Compliance Imperative for Businesses
  • Germany’s Approach to Implementing the AI Act
  • Global Call for AI Safety Standards by 2026
  • Governance in the Era of AI and Zero Trust
  • AI Governance Shift: From Regulation to Technical Secretariat
  • AI Safety as a Catalyst for Innovation in Global Majority Nations
  • ASEAN’s AI Governance: Charting a Distinct Path