Op-ed: Abandoning the AI Liability Directive Brings Unacceptable Risks

Europe’s need to cut red tape is no secret. This issue frequently arises in discussions with businesses, whether they are startups, scale-ups, or established companies. The European Commission has pledged to deliver on this goal, yet doubts persist about the means to achieve it.

The AI Liability Directive (AILD), which the European Commission has now decided to abandon, exemplifies these doubts. Proponents of the decision argue that additional liability rules could stifle innovation and investment in Europe. Scrapping the directive, however, risks achieving the opposite: by leaving companies without clear legal guidelines, the Commission reduces their incentive to invest.

Legal Uncertainty: A Barrier to AI Innovation in the EU

Investors in Europe are already known for their risk aversion. As AI technologies increasingly interact with both the real and virtual worlds, the risks multiply, and the Commission’s decision adds legal opacity and fragmentation to the mix.

The chain of accountability remains unclear. Who is responsible when risks inevitably materialize: the developer, the deployer, the seller, or the designer? And what happens when they share responsibility? The search for answers reveals a fragmented legal landscape.

Today, companies working with AI-driven technologies have little idea how inventive the judge facing them might be, nor which of the 27 national legal frameworks will apply to them.

AILD’s Role in Europe’s Digital Rulebook

Some opponents of the directive argue that there’s no need for further regulation since the AI Act and the new Product Liability Directive (PLD) cover the same ground. This perspective is misguided. Neither the AI Act nor the revised PLD substitutes for the AILD.

The distinction is clear: the AI Act deals with pre-emptive risk management, guiding AI players on how to avoid harm. It does not address who is responsible once harm has occurred. The Product Liability Directive, conversely, covers damages after an incident, but only those caused by defective products; the fault-based claims the AILD was meant to govern fall outside its scope. The differences between product liability and producer’s liability are well known, and the Commission should recognize them.

Without AILD, AI Risks Undermining Trust & Safety

AI harms often extend beyond product defects. For instance, what if AI causes damage in a professional context, using professional tools? What if the harm arises not from a manufacturing defect but from inadequate user instructions? What if the injury results from “rogue” AI behavior rooted not in a technical fault but in deployment mismanagement?

A growing class of use cases involves programmers using generative AI, itself free of any apparent defect, to build applications that embed AI components. What if such privately used applications cause harm to third parties? Ignoring these scenarios is not just a legal blind spot but a significant political liability.

The Commission should know better. By refusing to adopt harmonized AI liability rules, it exposes businesses to a patchwork of national standards and conflicting interpretations, hardly a recipe for accelerating AI uptake across the continent.

Instead of clarity, we get a game of legal roulette. Harmonization here does not mean overregulation; it means smart, targeted, fact-based rules that give innovators and consumers alike legal certainty.

The opacity, seeming autonomy, and unpredictability of AI systems make it hard for those affected to pinpoint responsibility. The AILD aimed to close these gaps with reasonable, modern tools, such as disclosure duties and rebuttable presumptions of fault, measures designed for AI’s unique risks.

The Commission’s vague hints about “future legal approaches” offer little comfort. Businesses need legal certainty now, not open-ended promises for the future.

At the heart of this debate lies a broader question: do we genuinely want a digital single market in Europe that transcends mere rhetoric? If so, harmonization is essential, and it must be grounded in fact. Without it, we risk more fragmentation, not predictability; more confusion, not clarity. With its latest retreat, the Commission isn’t simplifying; it’s surrendering.
