Integrating Medical Devices with the EU AI Act: Key Regulatory Insights

The intersection of medical devices and artificial intelligence (AI) is becoming increasingly significant in the healthcare sector. The EU AI Act provides a framework for how these technologies will coexist and be regulated. This study aims to explore how medical devices that integrate AI systems will be governed under the EU AI Act, highlighting key terms and implications for manufacturers and users.

When is a Medical Device Subject to the AI Act?

All medical devices that incorporate an AI system as defined in the AI Act are subject to its provisions when targeted at the EU market. The term “AI System” is broadly defined, potentially including systems that use big data for predictive analytics, even if they do not constitute what is traditionally considered artificial intelligence.

High-Risk and Limited-Risk AI Systems

Medical devices are categorized as either high-risk or limited-risk under the AI Act. High-risk devices face a stringent regulatory regime that requires notified body certification for both the AI system and the medical device itself.

There are two pathways for a medical device to qualify as a high-risk AI system (HRAIS). The first pathway is if the device is subject to a notified body conformity assessment under existing medical device regulations. In this case, the AI system must be either:

  • Intended as a safety component
  • A standalone product, such as software as a medical device (SaMD)

The second pathway includes devices defined as high-risk under Annex III of the AI Act, such as applications that monitor emotional states through facial recognition technology.

Where both definitions apply, the requirements attaching to each pathway may both be enforced, although an exemption may be available where the provider documents that the system is not high-risk.
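The two pathways above amount to a simple decision procedure. The sketch below is a minimal illustration only; the field names and function are hypothetical shorthand for the criteria described in this article, not terms defined in the AI Act:

```python
from dataclasses import dataclass

@dataclass
class MedicalDevice:
    """Hypothetical attributes relevant to HRAIS classification."""
    requires_notified_body_assessment: bool  # under MDR/IVDR
    ai_is_safety_component: bool             # AI intended as a safety component
    ai_is_standalone_product: bool           # e.g. software as a medical device (SaMD)
    listed_in_annex_iii: bool                # e.g. emotion monitoring via facial recognition

def is_high_risk_ai_system(device: MedicalDevice) -> bool:
    """Return True if either pathway to HRAIS status applies."""
    # Pathway 1: notified body conformity assessment under the medical
    # device regulations, with the AI system either a safety component
    # or a standalone product such as SaMD.
    pathway_1 = device.requires_notified_body_assessment and (
        device.ai_is_safety_component or device.ai_is_standalone_product
    )
    # Pathway 2: the device falls within Annex III of the AI Act.
    pathway_2 = device.listed_in_annex_iii
    return pathway_1 or pathway_2
```

Note that the pathways are independent: a device needing no notified body assessment can still be a HRAIS solely through Annex III, and vice versa.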

Medical Device Regulation vs. AI Act

Any medical device software with a clinical application that directly affects individuals is likely to be regulated under the EU medical device regulations. For high-risk AI systems, manufacturers must ensure compliance with both sets of legislation, requiring a unified declaration of conformity (DoC).

Compliance with the EU GDPR is also essential. Providers of HRAIS must declare their compliance with data protection regulations in their DoC, making non-compliance a potential risk to their device’s market validity.

Requirements for Limited-Risk Medical Devices

Manufacturers of limited-risk medical devices are exempt from the complexities of notified body regulation but must still adhere to additional provisions. Starting from February 2, 2025, manufacturers will need to implement training for staff regarding the operation and use of AI systems. They are also encouraged to comply with industry-specific codes of conduct that will be finalized by May 2, 2025.

Moreover, if a device interacts directly with individuals, the manufacturer must inform users that an AI system is involved unless it is evident. This information should be provided at the first point of interaction.

Obligations for Medical Devices Under Annex III

Devices classified as HRAIS under Annex III face specific obligations, including:

  • Informing individuals that a HRAIS is being used when it makes decisions affecting them.
  • Reporting serious incidents, including breaches of EU obligations, to market surveillance authorities.
  • Documenting the assessment where a device is deemed not high-risk and registering it in the EU database.

Provider vs. Manufacturer

The AI Act's "provider" corresponds to the "manufacturer" under the medical device regulations: both terms denote the primary entity responsible for compliance. However, the AI Act lacks an equivalent to the General Safety and Performance Requirements (GSPRs) found in the medical device regulations, creating a need for industry-specific guidance.

Quality Management Systems (QMS)

Medical device manufacturers are accustomed to maintaining a quality management system (QMS) that spans the entire lifecycle of their products. The AI Act permits the QMS requirements for AI systems to be integrated into the QMS already maintained under the medical device regulations, although the AI Act's requirements are broader in scope.

Manufacturers of HRAIS must consider detailed processes concerning data handling and system development as part of their QMS.

Clinical Investigations and Performance Studies

Under the EU medical device regulations, exceptions exist for devices used in authorized clinical investigations. However, the AI Act does not provide similar exceptions for HRAIS, limiting testing options to AI regulatory sandboxes established by competent authorities.

Preparing for Compliance

The combined regulatory burdens of the EU Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) create additional complexities for manufacturers of HRAIS. A thorough mapping exercise is recommended to determine the applicability of the AI Act and to minimize additional compliance work.

As the regulatory landscape evolves, manufacturers should prepare for the additional costs and delays associated with bringing HRAIS to the EU market by the enforcement date of August 2, 2027.
