A New Liability Framework for Products and AI

The European Union (EU) has taken a significant step in addressing the unique challenges posed by artificial intelligence (AI) through the implementation of the landmark Artificial Intelligence Act (AI Act), which entered into force on August 1, 2024. This legislation marks the first comprehensive legal framework for AI, prompting a closer examination of the EU’s new liability rules designed to safeguard consumers and businesses alike.

The New EU Product Liability Directive

One of the key components of this new framework is the EU Product Liability Directive (EU) 2024/2853 (PLD), which replaces its nearly 40-year-old predecessor. The new PLD imposes strict “no-fault” liability on manufacturers, suppliers, and other entities for defective products. The directive was published in the EU Official Journal on November 18, 2024, and entered into force on December 8, 2024. Member States have until December 9, 2026 to transpose it into their national laws.

As AI and digital technologies become increasingly integral to daily life, so does the potential for harm to consumers and businesses. The EU’s new liability rules cover products that integrate software and AI, ensuring that users have a legal avenue to seek compensation for the damage such products cause.

Key Takeaways of the New PLD

The new PLD introduces several important points:

  • Products placed on the EU market after December 9, 2026 will be subject to the new PLD, while products placed on the market before that date remain governed by the existing regime.
  • The directive aims to simplify the process for claimants suffering injury or loss from defective products, enabling easier claims against a wider range of entities, including manufacturers and online platforms.
  • Software and digital service providers will now face increased product liability risks.
  • Pharmaceutical and medical device manufacturers will likely be among the first to encounter significant legal challenges under the new regime.
  • Insurers must adapt to the implications of these new liability rules, engaging with policyholders to ensure compliance with product safety laws.

Key Provisions of the New PLD

While retaining many features from the existing regime, the new PLD brings forth transformative provisions:

  • Expanded definition of “product” to include digital manufacturing files and standalone software, including AI.
  • Expanded potential defendants now include providers of software and digital services, as well as online marketplaces.
  • Expanded definition of “damage” includes the destruction or corruption of data and medically certified psychological injury, broadening overall liability exposure.
  • New circumstances relevant to safety must be considered when determining product defectiveness, affecting how businesses navigate compliance.
  • New disclosure obligations require defendants to disclose relevant evidence where a claimant presents a plausible claim; failure to comply gives rise to a rebuttable presumption of defectiveness.
  • Extended limitation period allows claimants to bring latent personal injury claims up to 25 years after the product was placed on the market.
  • Rebuttable presumptions of defect shift the burden of proof to defendants in complex cases, making it easier for claimants to establish their claims.

Implications for Businesses

As the new PLD is set to reshape the landscape of product liability, businesses must prepare for potential impacts:

  • Compliance checks are crucial; businesses should conduct regular audits of documentation and quality management systems.
  • Ensuring adequate product labeling to describe risks and warnings accurately is essential for compliance.
  • Preparing for larger disclosure exercises is necessary, especially in jurisdictions where disclosure is typically limited.
  • Businesses should ensure they possess adequate insurance to cover potential claims arising from latent defects.

The Proposed AI Liability Directive

Alongside the new PLD, the AI Liability Directive (AILD) was proposed to establish harmonized, fault-based rules for damage caused by AI systems. While the new PLD provides a strict liability regime, the proposed AILD would allow claimants to bring non-contractual, fault-based claims, addressing the particular evidential complexities of AI.

Key Features of the AILD

The AILD proposes notable features aimed at streamlining the claims process:

  • A rebuttable presumption of causality would aid claimants in demonstrating the causal link between an AI system’s failure and the resulting harm.
  • A right of access to evidence held by providers or users of high-risk AI systems would facilitate claims by allowing claimants to obtain the documentation they need.

Conclusion

The EU’s new liability framework for products and AI represents a vital shift in regulatory approach, responding to the burgeoning role of technology in society. As businesses prepare for these changes, understanding both the new PLD and the proposed AILD will be essential for navigating the evolving legal landscape.
