Integrating Medical Devices with the EU AI Act: Key Regulatory Insights

The intersection of medical devices and artificial intelligence (AI) is increasingly significant in healthcare, and the EU AI Act provides the framework under which these technologies will coexist and be regulated. This article examines how medical devices that integrate AI systems will be governed under the EU AI Act, highlighting key terms and the implications for manufacturers and users.

When is a Medical Device Subject to the AI Act?

All medical devices that incorporate an AI system, as defined in the AI Act, fall within its scope when placed on the EU market. The term “AI system” is defined broadly and can capture systems that use big data for predictive analytics, even if they would not traditionally be considered artificial intelligence.

High-Risk and Limited-Risk AI Systems

Under the AI Act, AI-enabled medical devices fall into either a high-risk or a limited-risk category. High-risk devices face a stringent regulatory regime that requires notified body certification of both the AI system and the medical device itself.

There are two pathways by which a medical device can qualify as a high-risk AI system (HRAIS). The first applies where the device is subject to a notified body conformity assessment under the existing medical device regulations. In that case, the AI system must be either:

  • intended as a safety component of the device, or
  • itself the product, such as software as a medical device (SaMD).

The second pathway includes devices defined as high-risk under Annex III of the AI Act, such as applications that monitor emotional states through facial recognition technology.

Where both definitions apply, both the HRAIS requirements and the Annex III-specific obligations may be enforceable, although an exemption may be available where the provider documents that an Annex III device does not pose a significant risk.
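As a rough illustration only, the two qualification pathways above can be sketched as a simple classification check. All field and function names here are hypothetical, and the sketch deliberately omits the nuances (and exemptions) of the actual legal test:

```python
from dataclasses import dataclass


@dataclass
class Device:
    """Illustrative attributes of an AI-enabled medical device (hypothetical names)."""
    needs_notified_body_assessment: bool  # subject to notified body conformity assessment under MDR/IVDR
    ai_is_safety_component: bool          # AI system intended as a safety component of the device
    ai_is_standalone_product: bool        # AI system is itself the product, e.g. SaMD
    annex_iii_use_case: bool              # falls within a use case listed in Annex III of the AI Act


def is_hrais(d: Device) -> bool:
    """Return True if either high-risk pathway described above applies."""
    # Pathway 1: notified body assessment plus AI as safety component or standalone product
    pathway_1 = d.needs_notified_body_assessment and (
        d.ai_is_safety_component or d.ai_is_standalone_product
    )
    # Pathway 2: an Annex III use case (e.g. emotion monitoring via facial recognition)
    pathway_2 = d.annex_iii_use_case
    return pathway_1 or pathway_2
```

For example, an SaMD product requiring notified body assessment would satisfy pathway 1, while a device with no notified body involvement but an Annex III use case would satisfy pathway 2.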

Medical Device Regulation vs. AI Act

Any software with clinical applications that directly affects individuals is likely to be regulated under the EU medical device regulations. For high-risk AI systems, manufacturers must ensure compliance with both sets of legislation, which can be captured in a single, unified declaration of conformity (DoC).

Compliance with the EU GDPR is also essential. Providers of HRAIS must declare their compliance with data protection regulations in their DoC, making non-compliance a potential risk to their device’s market validity.

Requirements for Limited-Risk Medical Devices

Manufacturers of limited-risk medical devices are exempt from the complexities of notified body regulation but must still meet additional provisions. From February 2, 2025, manufacturers must train staff on the operation and use of AI systems. They are also encouraged to comply with industry-specific codes of conduct, which are to be finalized by May 2, 2025.

Moreover, if a device interacts directly with individuals, the manufacturer must inform users that an AI system is involved, unless this is obvious from the context. The information should be provided at the first point of interaction.

Obligations for Medical Devices Under Annex III

Devices classified as HRAIS under Annex III face specific obligations, including:

  • Informing individuals when decisions affecting them involve an HRAIS.
  • Reporting serious incidents, and infringements of EU obligations, to market surveillance authorities.
  • Documenting the assessment where a device is deemed not high-risk, and registering the device in the EU database.

Provider vs. Manufacturer

The AI Act’s “provider” corresponds to the medical device regulations’ “manufacturer”: in each regime the term denotes the primary entity responsible for compliance. However, the AI Act has no equivalent of the General Safety and Performance Requirements (GSPRs) found in the medical device regulations, creating a need for industry-specific guidance.

Quality Management Systems (QMS)

Medical device manufacturers are accustomed to maintaining a quality management system (QMS) spanning the entire product lifecycle. The AI Act allows QMS compliance for AI systems to be integrated into the QMS maintained under the existing medical device regulations, though the AI Act’s QMS requirements are broader and more focused on regulatory adherence.

Manufacturers of HRAIS must consider detailed processes concerning data handling and system development as part of their QMS.

Clinical Investigations and Performance Studies

Under the EU medical device regulations, exceptions exist for devices used in authorized clinical investigations. However, the AI Act does not provide similar exceptions for HRAIS, limiting testing options to AI regulatory sandboxes established by competent authorities.

Preparing for Compliance

The combined regulatory burdens of the EU Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) create additional complexities for manufacturers of HRAIS. A thorough mapping exercise is recommended to determine the applicability of the AI Act and to minimize additional compliance work.
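Such a mapping exercise can start as a simple table pairing each AI Act requirement with the existing MDR/IVDR process it could extend, so that genuine gaps stand out. A minimal sketch, with entirely hypothetical example entries rather than an authoritative mapping:

```python
# Illustrative compliance-mapping sketch: AI Act requirement -> existing MDR/IVDR
# process to build on, or None where net-new compliance work is likely needed.
# All entries are hypothetical examples, not legal advice.
AI_ACT_TO_MDR = {
    "risk management system": "MDR Annex I risk management (extend to AI-specific risks)",
    "data and data governance": None,
    "technical documentation": "MDR technical documentation (augment with AI details)",
    "record-keeping / logging": None,
    "human oversight": "usability engineering file (extend)",
}


def gaps(mapping: dict) -> list:
    """Return the requirements with no existing process to build on."""
    return [req for req, existing in mapping.items() if existing is None]


print(gaps(AI_ACT_TO_MDR))  # requirements likely needing net-new compliance work
```

The output of such an exercise is a shortlist of requirements that cannot simply be absorbed into existing MDR/IVDR documentation, helping manufacturers scope the additional work before the enforcement date.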

As the regulatory landscape evolves, manufacturers should prepare for the additional costs and delays associated with bringing HRAIS to the EU market by the enforcement date of August 2, 2027.
