Op-ed: Abandoning the AI Liability Directive Brings Unacceptable Risks

Europe’s need to cut red tape is no secret. This issue frequently arises in discussions with businesses, whether they are startups, scale-ups, or established companies. The European Commission has pledged to deliver on this goal, yet doubts persist about the means to achieve it.

The AI Liability Directive (AILD), which the European Commission has decided to abandon, exemplifies this concern. Proponents of the withdrawal argue that additional liability rules could stifle innovation and investment in Europe. Yet scrapping the directive risks achieving the opposite: by leaving companies without clear legal guidelines, the Commission reduces their incentive to invest.

Legal Uncertainty: A Barrier to AI Innovation in the EU

Investors in Europe are already known for their risk aversion. As AI technologies increasingly interact with both the real and virtual worlds, the risks multiply, and the Commission’s decision adds legal opacity and fragmentation to the mix.

The chains of accountability remain unclear. Who is responsible when risks inevitably materialize: the developer, the deployer, the seller, or the designer? What happens when responsibility is shared among them? The search for answers reveals a fragmented legal landscape.

As things stand, companies working with AI-driven technologies have little idea how inventively a judge might fill the legal gaps, or which of the 27 national legal frameworks they will face.

AILD’s Role in Europe’s Digital Rulebook

Some opponents of the directive argue that there’s no need for further regulation since the AI Act and the new Product Liability Directive (PLD) cover the same ground. This perspective is misguided. Neither the AI Act nor the revised PLD substitutes for the AILD.

The distinction is clear: the AI Act deals with pre-emptive risk management, guiding AI players on how to avoid harm; it does not address who is liable once harm has occurred. The revised Product Liability Directive, conversely, covers compensation after an incident, but only for harm caused by defective products under the producer’s no-fault liability. The AILD targeted a different category: fault-based claims under national tort law against those who develop, deploy, or use AI. The difference between the two regimes is well known, and the Commission should recognize it.

Without AILD, AI Risks Undermining Trust & Safety

AI harms often extend beyond product defects. What if AI causes damage when used as a professional tool in a professional context? What if the harm arises not from a manufacturing defect but from inadequate user instructions? What if the injury results from “rogue” AI behavior rooted not in a technical fault but in mismanaged deployment?

A growing class of use cases involves programmers using generative AI, itself showing no apparent defect, to build applications that embed AI components. What if such privately built applications cause harm to third parties? Ignoring these scenarios is not just a legal blind spot but a significant political liability.

The Commission should know better. By refusing to adopt harmonized AI liability rules, it exposes businesses to a patchwork of national standards and conflicting interpretations, which can only slow the uptake of AI across the continent.

Instead of clarity, we get a game of legal roulette. Harmonization here does not mean overregulation; it means smart, targeted, fact-based rules that give innovators and consumers alike legal certainty.

The opacity, apparent autonomy, and unpredictability of AI systems make it difficult for victims to pinpoint responsibility. The AILD aimed to close this gap with reasonable, modern tools, such as disclosure duties and rebuttable presumptions of causality, measures designed for AI’s unique risks.

The Commission’s vague hints about “future legal approaches” offer little comfort. Businesses need legal certainty now, not open-ended promises for the future.

At the heart of this debate lies a broader question: do we actually want a digital single market in Europe, or just the rhetoric of one? If we do, harmonization is essential, and it must be grounded in fact. Without it, we risk more fragmentation, not predictability; more confusion, not clarity. With its latest retreat, the Commission isn’t simplifying; it’s surrendering.
