Integrating Medical Devices with the EU AI Act: Key Regulatory Insights

The intersection of medical devices and artificial intelligence (AI) is becoming increasingly significant in the healthcare sector. The EU AI Act provides a framework for how these technologies will coexist and be regulated. This study aims to explore how medical devices that integrate AI systems will be governed under the EU AI Act, highlighting key terms and implications for manufacturers and users.

When is a Medical Device Subject to the AI Act?

All medical devices that incorporate an AI system as defined in the AI Act are subject to its provisions when targeted at the EU market. The term “AI System” is broadly defined, potentially including systems that use big data for predictive analytics, even if they do not constitute what is traditionally considered artificial intelligence.

High-Risk and Limited-Risk AI Systems

Medical devices are categorized as either high-risk or limited-risk under the AI Act. High-risk devices face a stringent regulatory regime that requires notified body certification for both the AI system and the medical device itself.

There are two pathways for a medical device to qualify as a high-risk AI system (HRAIS). The first pathway applies where the device is subject to a notified body conformity assessment under the existing medical device regulations. In this case, the AI system must be either:

  • A safety component of the device, or
  • The product itself, such as software as a medical device (SaMD)

The second pathway includes devices defined as high-risk under Annex III of the AI Act, such as applications that monitor emotional states through facial recognition technology.

Where both definitions apply, a device may be subject to both the high-risk AI system regime and the Annex III requirements, although an Annex III device may be exempt if its provider documents an assessment that it does not pose a high risk.
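The two qualification pathways described above can be summarised in a small decision helper. This is an illustrative sketch only: the field names are assumptions introduced for the example, and such a check is no substitute for a legal assessment of the device.

```python
from dataclasses import dataclass


@dataclass
class DeviceProfile:
    """Simplified, illustrative attributes of a medical device with an AI system."""
    requires_notified_body_assessment: bool  # under the EU medical device regulations
    ai_is_safety_component: bool             # AI intended as a safety component
    ai_is_standalone_product: bool           # AI is the product itself, e.g. SaMD
    listed_in_annex_iii: bool                # use case listed in Annex III of the AI Act


def is_high_risk_ai_system(d: DeviceProfile) -> bool:
    # Pathway 1: notified body conformity assessment, with the AI system
    # acting as a safety component or constituting the product itself.
    pathway_1 = d.requires_notified_body_assessment and (
        d.ai_is_safety_component or d.ai_is_standalone_product
    )
    # Pathway 2: the use case appears in Annex III of the AI Act.
    pathway_2 = d.listed_in_annex_iii
    return pathway_1 or pathway_2
```

For example, an SaMD product that undergoes notified body assessment qualifies via the first pathway, while an emotion-monitoring application qualifies via the second even without such an assessment.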

Medical Device Regulation vs. AI Act

In the realm of medical devices, any software with clinical applications that directly affect individuals is likely to be regulated under the EU medical device regulations. For high-risk AI systems, manufacturers must ensure compliance with both sets of legislation and issue a single declaration of conformity (DoC) covering both.

Compliance with the EU GDPR is also essential. Providers of HRAIS must declare their compliance with data protection law in their DoC, so a data protection failure can call the validity of the DoC, and with it the device's access to the market, into question.

Requirements for Limited-Risk Medical Devices

Manufacturers of limited-risk medical devices are exempt from the complexities of notified body regulation but must still adhere to additional provisions. Starting from February 2, 2025, manufacturers will need to implement training for staff regarding the operation and use of AI systems. They are also encouraged to comply with industry-specific codes of conduct that will be finalized by May 2, 2025.

Moreover, if a device interacts directly with individuals, the manufacturer must inform users that an AI system is involved unless it is evident. This information should be provided at the first point of interaction.

Obligations for Medical Devices Under Annex III

Devices classified as HRAIS under Annex III face specific obligations, including:

  • Informing individuals of the use of HRAIS when decisions affect them.
  • Reporting serious incidents that violate EU obligations to market surveillance authorities.
  • Documenting assessments if a device is deemed not high-risk and registering it in the EU database.

Provider vs. Manufacturer

The AI Act's "provider" corresponds to the "manufacturer" of the medical device regulations: both denote the primary entity responsible for compliance. However, the AI Act has no equivalent of the General Safety and Performance Requirements (GSPRs) found in the medical device regulations, creating a need for industry-specific guidance.

Quality Management Systems (QMS)

Medical device manufacturers are accustomed to maintaining a quality management system (QMS) that spans the entire lifecycle of their products. The AI Act allows the QMS requirements for AI systems to be integrated into this existing medical device QMS, although the AI Act's requirements are broader in scope.

Manufacturers of HRAIS must consider detailed processes concerning data handling and system development as part of their QMS.

Clinical Investigations and Performance Studies

Under the EU medical device regulations, exceptions exist for devices used in authorized clinical investigations. However, the AI Act does not provide similar exceptions for HRAIS, limiting testing options to AI regulatory sandboxes established by competent authorities.

Preparing for Compliance

The combined regulatory burdens of the EU Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR) create additional complexities for manufacturers of HRAIS. A thorough mapping exercise is recommended to determine the applicability of the AI Act and to minimize additional compliance work.

As the regulatory landscape evolves, manufacturers should prepare for the additional costs and delays associated with bringing HRAIS to the EU market by the enforcement date of August 2, 2027.
