EU Considers AI Liability Directive: What’s Next for Developers?

EU Legislation After the AI Act: Brussels Considers New Liability Rules

The EU has opened a consultation on a proposed AI Liability Directive (AILD), which would establish a civil liability regime for AI developers and users. The proposal comes after the EU’s AI Act entered into force, prompting lawmakers to turn their attention to the next round of rules governing artificial intelligence.

Key Takeaways

  • The AILD aims to create a unified civil liability framework across the EU for damages caused by AI systems.
  • The consultation will examine whether the directive is needed at all and whether it should mandate AI liability insurance.

Does the EU Need More AI Rules?

Until the AI Act, AI was governed by a fragmented patchwork of EU and national laws, each often drafted for a different technological context. The Act’s adoption has significantly harmonized this regulatory landscape.

In 2024, the EU updated the Product Liability Directive (PLD) for the digital age, adding a section specifically covering “software and AI systems” and clarifying developers’ responsibilities towards users. Some stakeholders therefore argue that the updated PLD already covers all potential AI liability scenarios, rendering the AILD unnecessary.

A recent impact assessment by the European Parliament, however, highlights several issues the revised PLD leaves unresolved. Notably, the PLD establishes liability only for professional and corporate AI users, leaving a regulatory gap where harm is caused by non-professional users. It also restricts the types of damage eligible for compensation, leaving significant loopholes.

Unaddressed Areas

The PLD does not cover cases in which AI produces discriminatory outcomes, infringes individuals’ rights to privacy and dignity, or causes environmental and climate harm.

Lawmakers Consider Mandating AI Liability Insurance

A central question in the AILD consultation is whether the directive should require operators of high-risk AI systems to carry liability insurance. The European Commission’s position is that such insurance should be mandatory for all high-risk AI operators, a requirement that could impose considerable costs on companies developing technologies classified as high-risk under Article 6 of the AI Act.

For instance, developers of biometric surveillance tools may be required to insure against risks associated with false identification.

However, the consultation raises the question of whether insurers have enough data to underwrite AI risks effectively. A Deloitte report points to a global shortage of AI-related insurance products, attributing it to the lack of historical data on AI model performance and the pace at which the technology evolves.

Next Steps for the AI Liability Directive

Following the six-week consultation period, the European Parliament’s AILD rapporteur, Axel Voss, is set to report his findings in June. A second, twelve-week consultation will then take place, followed by negotiations between the Parliament and the European Commission from September to December.

The Committee on Legal Affairs is expected to vote on the final text in January 2026, with a plenary vote scheduled for February.

The process underscores the EU’s commitment to building a comprehensive AI regulatory framework that keeps pace with rapidly advancing technology.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...