Op-ed: Abandoning the AI Liability Directive Brings Unacceptable Risks

Europe’s need to cut red tape is no secret. This issue frequently arises in discussions with businesses, whether they are startups, scale-ups, or established companies. The European Commission has pledged to deliver on this goal, yet doubts persist about the means to achieve it.

The AI Liability Directive (AILD), which the European Commission has decided to abandon, exemplifies this concern. Proponents of the decision argue that additional liability rules could stifle innovation and investment in Europe. Yet scrapping the directive may achieve the opposite: by leaving companies without clear legal guidelines, the Commission reduces their incentive to invest.

Legal Uncertainty: A Barrier to AI Innovation in the EU

Investors in Europe are already known for their risk aversion. As AI technologies increasingly interact with both the real and virtual worlds, the risks multiply, and the Commission’s decision adds legal opacity and fragmentation to the mix.

The chains of accountability remain unclear. Who is responsible when risks inevitably materialize: the developer, the deployer, the seller, or the designer? What happens when responsibility is shared among them? The search for answers reveals a fragmented legal landscape.

Currently, companies dealing with AI-driven technologies have little idea how a judge might interpret their case, or which of the 27 national legal frameworks they will face.

AILD’s Role in Europe’s Digital Rulebook

Some opponents of the directive argue that there’s no need for further regulation since the AI Act and the new Product Liability Directive (PLD) cover the same ground. This perspective is misguided. Neither the AI Act nor the revised PLD substitutes for the AILD.

The distinction is clear: the AI Act deals with pre-emptive risk management, guiding AI players on how to avoid harm. It does not address who is responsible after harm has occurred. The revised PLD, conversely, covers compensation after an incident, but only for damage caused by defective products under a no-fault regime; the AILD targeted fault-based claims, a distinct category. The differences between product liability and producer's liability are well known, and the Commission should acknowledge them.

Without AILD, AI Risks Undermining Trust & Safety

AI harms often extend beyond product defects. For instance, what if AI causes damage in a professional context using professional tools? What if the harm arises not from a manufacturing defect but from inadequate user instructions? What if the injury results from a “rogue” AI behavior not rooted in technical fault but in deployment mismanagement?

A growing class of use cases involves developers using generative AI tools, themselves free of apparent defects, to build applications that embed AI components. What if such privately used applications cause harm to third parties? Ignoring these scenarios is not just a legal blind spot but a significant political liability.

The Commission should know better. By refusing to adopt harmonized AI liability rules, it exposes businesses to a patchwork of national standards and conflicting interpretations, to the detriment of AI uptake across the continent.

Instead of clarity, we encounter a game of legal roulette. In this case, harmonization does not imply overregulation; it represents smart, targeted, fact-based rules that provide both innovators and consumers with legal certainty.

The opacity, apparent autonomy, and unpredictability of AI systems make it difficult for users to pinpoint responsibility. The AILD aimed to close these gaps through reasonable, modern tools, such as disclosure duties and rebuttable presumptions of causality, measures designed for AI’s unique risks.

The Commission’s vague hints about “future legal approaches” offer little comfort. Businesses need legal certainty now, not open-ended promises for the future.

At the heart of this debate lies a broader question: Do we genuinely desire a digital single market in Europe that transcends mere rhetoric? If the answer is affirmative, harmonization is essential and must be grounded in fact. Without it, we risk more fragmentation, not predictability; more confusion, not clarity. With its latest retreat, the Commission isn’t simplifying—it’s surrendering.
