A New Era for Product and AI Liability

A New Liability Framework for Products and AI

The European Union (EU) has taken a significant step in addressing the unique challenges posed by artificial intelligence (AI) through the implementation of the landmark Artificial Intelligence Act (AI Act), which entered into force on August 1, 2024. This legislation marks the first comprehensive legal framework for AI, prompting a closer examination of the EU’s new liability rules designed to safeguard consumers and businesses alike.

The New EU Product Liability Directive

One of the key components of this new framework is the EU Product Liability Directive (EU) 2024/2853 (PLD), which replaces its nearly 40-year-old predecessor. The new PLD imposes strict “no-fault” liability on manufacturers, suppliers, and other entities for defective products. This directive was published in the EU Official Journal on November 18, 2024, and came into force on December 8, 2024. Member States have until December 9, 2026 to implement the new PLD into their national laws.

As AI and digital technologies become increasingly integral to daily life, the potential for consumer and business harm from these technologies continues to rise. The EU’s new liability rules encompass products that integrate software and AI, ensuring users have a legal avenue to seek compensation for damages incurred as a result of these technologies.

Key Takeaways of the New PLD

The new PLD introduces several important points:

  • Products placed on the EU market after December 9, 2026 will be subject to the new PLD, while those placed on the market before that date remain governed by the existing regime.
  • The directive aims to simplify the process for claimants suffering injury or loss from defective products, enabling easier claims against a wider range of entities, including manufacturers and online platforms.
  • Software and digital service providers will now face increased product liability risks.
  • Pharmaceutical and medical device manufacturers will likely be among the first to encounter significant legal challenges under the new regime.
  • Insurers must adapt to the implications of these new liability rules, engaging with policyholders to ensure compliance with product safety laws.

Key Provisions of the New PLD

While retaining many features from the existing regime, the new PLD brings forth transformative provisions:

  • Expanded definition of “product” to include digital manufacturing files and standalone software, including AI.
  • Expanded potential defendants now include providers of software and digital services, as well as online marketplaces.
  • Expanded definition of “damage” includes destruction or corruption of data and medically certified psychological injury, thus expanding liability risks.
  • New circumstances relevant to safety must be considered when determining product defectiveness, affecting how businesses navigate compliance.
  • New disclosure obligations require defendants to provide necessary evidence upon a plausible claim, with non-compliance leading to a rebuttable presumption of defect.
  • Extended limitation period allows claimants to bring latent personal injury claims within 25 years of the product being placed on the market.
  • Rebuttable presumptions of defect shift the burden to defendants in complex cases, making it easier for claimants to prove their cases.

Implications for Businesses

As the new PLD is set to reshape the landscape of product liability, businesses must prepare for potential impacts:

  • Compliance checks are crucial; businesses should conduct regular audits of documentation and quality management systems.
  • Ensuring adequate product labeling to describe risks and warnings accurately is essential for compliance.
  • Preparing for larger disclosure exercises is necessary, especially in jurisdictions where disclosure is typically limited.
  • Businesses should ensure they possess adequate insurance to cover potential claims arising from latent defects.

The Proposed AI Liability Directive

Alongside the new PLD, the AI Liability Directive (AILD) was introduced to establish harmonized, fault-based rules for damages caused by AI systems. While the new PLD provides a strict liability regime, the proposed AILD would allow claimants to bring non-contractual fault-based claims, addressing the evidentiary complexities of AI systems.

Key Features of the AILD

The AILD proposes notable features aimed at streamlining the claims process:

  • A rebuttable presumption of causality would aid claimants in demonstrating the causal link between AI system failures and resulting harm.
  • A right of access to evidence from providers or users of high-risk AI systems would facilitate claims by allowing claimants to obtain necessary documentation.

Conclusion

The EU’s new liability framework for products and AI represents a vital shift in regulatory approach, responding to the burgeoning role of technology in society. As businesses prepare for these changes, understanding both the new PLD and the proposed AILD will be essential for navigating the evolving legal landscape.

Articles

AI Regulations: The EU's Landmark Act Versus Australia's Guardrails

Global businesses adopting artificial intelligence must understand international AI regulations. The European Union and Australia have taken different approaches...

Quebec's AI Policy: Toward Responsible Higher Education

The Quebec government has finally published an AI policy for universities and CEGEPs, nearly three years after the launch of ChatGPT. Although concerns remain about the...

AI Literacy: A New Compliance Challenge for Businesses

Enterprise adoption of AI is accelerating rapidly, but this creates a challenge in understanding the tools. The EU AI Act now requires that all staff, including...

Germany Prepares to Enforce the AI Act to Spur Innovation

Existing regulators will be responsible for overseeing German companies' compliance with the EU AI Act, with an enhanced role for the Federal Network Agency...

The Urgent Call for Global AI Regulation by 2026

World leaders and AI pioneers are calling on the UN to establish binding global safeguards for AI by 2026. The initiative aims to ensure safety and ethics in...

AI Governance in a Zero-Trust Economy

In 2025, AI governance must align with the principles of a zero-trust economy, ensuring that AI systems are accountable and transparent. This allows businesses to...

A New Governance Framework for AI: Toward a Technical Secretariat

The next governance framework for artificial intelligence could include a "technical secretariat" to coordinate AI policy across government departments. This...

Sustainable Innovation Through AI Safety in Global Majority Countries

The article discusses the importance of AI safety and security for fostering innovation in Global Majority countries. It emphasizes that these investments are not...

Toward Coherent AI Governance for ASEAN

ASEAN is adopting an AI governance approach based on voluntary principles, seeking to balance innovation and regulation while accounting for the diversity of its member states...