Integrating Medical Devices with the EU AI Act: Key Regulatory Insights

The intersection of medical devices and artificial intelligence (AI) is becoming increasingly significant in the healthcare sector. The EU AI Act provides a framework for how these technologies will coexist and be regulated. This study aims to explore how medical devices that integrate AI systems will be governed under the EU AI Act, highlighting key terms and implications for manufacturers and users.

When is a Medical Device Subject to the AI Act?

All medical devices that incorporate an AI system as defined in the AI Act are subject to its provisions when placed on or targeted at the EU market. The term “AI system” is broadly defined and may capture systems that use big data for predictive analytics, even where these would not traditionally be considered artificial intelligence.

High-Risk and Limited-Risk AI Systems

Medical devices are categorized as either high-risk or limited-risk under the AI Act. High-risk devices face a stringent regulatory regime that requires notified body certification for both the AI system and the medical device itself.

There are two pathways for a medical device to qualify as a high-risk AI system (HRAIS). The first pathway is if the device is subject to a notified body conformity assessment under existing medical device regulations. In this case, the AI system must be either:

  • Intended as a safety component
  • A standalone product, such as software as a medical device (SaMD)

The second pathway includes devices defined as high-risk under Annex III of the AI Act, such as applications that monitor emotional states through facial recognition technology.

Where both definitions apply, the device must satisfy both the product-related high-risk requirements and the Annex III obligations, although a provider may be able to exempt an Annex III system by documenting an assessment that it does not pose a significant risk.
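The two-pathway test described above can be sketched as a simple decision rule. This is an illustrative simplification only: the field names and the `Device` type are hypothetical and do not come from the AI Act or any library, and the real legal analysis involves far more nuance than boolean flags.

```python
# Hypothetical sketch of the two-pathway HRAIS test described above.
# All names and flags are illustrative, not drawn from the statute.
from dataclasses import dataclass


@dataclass
class Device:
    needs_notified_body_assessment: bool  # conformity assessment under MDR/IVDR
    ai_is_safety_component: bool          # AI intended as a safety component
    ai_is_standalone_product: bool        # e.g. software as a medical device (SaMD)
    listed_in_annex_iii: bool             # e.g. emotion monitoring via facial recognition


def is_high_risk_ai_system(d: Device) -> bool:
    # Pathway 1: notified-body-assessed product whose AI system is a
    # safety component or is itself the product.
    pathway_1 = d.needs_notified_body_assessment and (
        d.ai_is_safety_component or d.ai_is_standalone_product
    )
    # Pathway 2: the use case is listed in Annex III of the AI Act.
    pathway_2 = d.listed_in_annex_iii
    return pathway_1 or pathway_2
```

For example, a diagnostic SaMD that requires notified body assessment would qualify via pathway 1, while a wellness app monitoring emotional states could qualify via pathway 2 even without notified body involvement.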

Medical Device Regulation vs. AI Act

In the realm of medical devices, any software with clinical applications that directly affect individuals is likely to be regulated under the EU medical device regulations. For high-risk AI systems, manufacturers must ensure compliance with both sets of legislation and may issue a single declaration of conformity (DoC) covering both.

Compliance with the EU GDPR is also essential. Providers of HRAIS must declare their compliance with data protection regulations in their DoC, making non-compliance a potential risk to their device’s market validity.

Requirements for Limited-Risk Medical Devices

Manufacturers of limited-risk medical devices are exempt from the complexities of notified body regulation but must still adhere to additional provisions. Starting from February 2, 2025, manufacturers will need to implement training for staff regarding the operation and use of AI systems. They are also encouraged to comply with industry-specific codes of conduct that will be finalized by May 2, 2025.

Moreover, if a device interacts directly with individuals, the manufacturer must inform users that an AI system is involved, unless this is obvious from the context. This information should be provided at the first point of interaction.

Obligations for Medical Devices Under Annex III

Devices classified as HRAIS under Annex III face specific obligations, including:

  • Informing individuals of the use of HRAIS when decisions affect them.
  • Reporting serious incidents, including breaches of EU obligations protecting fundamental rights, to market surveillance authorities.
  • Documenting assessments if a device is deemed not high-risk and registering it in the EU database.

Provider vs. Manufacturer

The AI Act’s “provider” corresponds broadly to the “manufacturer” under the medical device regulations: in each case, the primary entity responsible for compliance. However, the AI Act has no equivalent of the General Safety and Performance Requirements (GSPRs) found in the medical device regulations, creating a need for industry-specific guidance.

Quality Management Systems (QMS)

Medical device manufacturers are accustomed to maintaining a quality management system (QMS) that spans the entire lifecycle of their products. The AI Act allows QMS requirements for AI systems to be integrated into the QMS maintained under the medical device regulations, although its scope is broader, extending to processes for regulatory compliance beyond product safety.

Manufacturers of HRAIS must consider detailed processes concerning data handling and system development as part of their QMS.

Clinical Investigations and Performance Studies

Under the EU medical device regulations, exceptions exist for devices used in authorized clinical investigations. However, the AI Act does not provide similar exceptions for HRAIS, limiting testing options to AI regulatory sandboxes established by competent authorities.

Preparing for Compliance

The combined regulatory burdens of the EU Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) create additional complexities for manufacturers of HRAIS. A thorough mapping exercise is recommended to determine the applicability of the AI Act and to minimize additional compliance work.

As the regulatory landscape evolves, manufacturers should prepare for the additional costs and delays associated with bringing HRAIS to the EU market by the enforcement date of August 2, 2027.
