European AI Act Compliance: What Medical Device Companies Need to Know

The European Union’s AI Act is poised to shape the future of medical devices, particularly those that incorporate artificial intelligence (AI). For healthcare firms developing software as a medical device (SaMD) or other digital and AI-enabled devices, compliance with this regulation is a matter of preparation rather than a cause for panic.

Understanding the AI Act

The AI Act was passed by the European Parliament on March 13, 2024, received approval from the EU Council on May 21, 2024, and officially entered into force on August 1, 2024. It serves as a risk-management system aimed at identifying, evaluating, and mitigating the potential risks that AI systems may pose to health, safety, and fundamental rights such as privacy and data protection.

For manufacturers of Class IIa or higher AI-enabled devices, compliance with the AI Act is expected to closely mirror the requirements of the EU Medical Device Regulation (MDR). That overlap is an advantage for companies already engaged in MDR compliance efforts.

Key Compliance Dates

The Act phases in over several deadlines, but most AI-enabled medical devices fall into the high-risk category because they are products, or safety components of products, covered by the Union harmonisation legislation listed in the Act’s Annex I, which includes the MDR. These devices must be compliant by August 2, 2027. This timeline gives organizations a clear pathway to prepare.

Integrating AI Act with MDR

The AI Act is an unusual piece of legislation in that it blends product legislation with human rights-adjacent legislation. It is designed to work in conjunction with the MDR to ensure safety, fairness, and respect for fundamental rights. The AI Act’s new transparency requirements build on safety and fairness elements that were already present in the MDR.

Compliance Strategies

Companies already preparing AI-based software for the MDR should find that their compliance processes remain largely consistent, albeit with increased emphasis on data transparency. Organizations are advised to prepare early for new standards and guidance, integrating them into their quality management processes as they become available.

Conclusion

As the landscape of medical devices continues to evolve with advancements in AI, understanding and adhering to the AI Act will be essential for compliance. The regulation aims to ensure safety while fostering innovation in the medical device sector. Organizations should view this as an evolution of their existing compliance frameworks rather than a complete overhaul.
