Reviving the AI Liability Directive: Challenges and Prospects

The Future of the AI Liability Directive

The European Commission first published a proposal for an AI Liability Directive (“AILD”) in September 2022 as part of a broader set of initiatives, including proposals for a new Product Liability Directive (“new PLD”) and the EU AI Act. The AILD was designed to introduce uniform rules for certain aspects of non-contractual civil claims related to AI, incorporating evidence-disclosure requirements and rebuttable presumptions intended to ease claimants’ burden of proof.

However, unlike the new PLD and EU AI Act, which have both been adopted and are currently in force, the AILD has stalled and faced significant resistance during its legislative journey.

Recent Developments

On January 21, 2025, the AILD seemed to gain renewed momentum when its rapporteur, Axel Voss, announced a timetable aimed at adopting the AILD by February 2026. This schedule included key dates for consultations, the drafting of a report, amendments, negotiations, and voting. In line with this timetable, a six-week stakeholder consultation commenced on February 3, 2025.

Nevertheless, just days later, on February 11, 2025, the European Commission published its 2025 work programme, which included plans to withdraw the AILD, citing “No foreseeable agreement” on the proposal as the reason. The AILD has not yet been officially withdrawn; the Commission is set to notify the European Parliament and the Council of the European Union of its intentions regarding the withdrawal.

Opposition and Uncertainty

In the Council of the European Union, the AILD has encountered strong opposition from a coalition of countries, which may diminish the likelihood of continued efforts to advance the proposal. In the European Parliament, Voss has criticized the withdrawal plan; however, several influential lawmakers within the European People’s Party (EPP), to which Voss belongs, have also expressed concerns about the AILD.

Although the AILD has not been officially withdrawn and could still be revived, the political opposition surrounding it has cast a shadow over its future. Observers will continue to monitor developments closely and provide updates as the situation evolves.

This overview highlights the complexities and challenges surrounding the AILD, emphasizing the need for clarity and consensus in the evolving landscape of AI regulation.
