Impact of the EU AI Act on Medtech Innovation
The EU AI Act, which came into force on August 1, 2024, establishes specific regulations for artificial intelligence (AI) systems, particularly those categorized as “high risk.” This legislation has significant implications for the medtech industry, which often integrates AI components into devices, products, and services such as diagnostic tools, surgical robotics, and personalized treatment plans.
High-Risk Classification
AI systems that are part of medical devices, especially those used for diagnosis, monitoring, and treatment, are likely to be classified as high risk. This classification entails adhering to stringent requirements related to safety, transparency, and risk management. Companies must conduct thorough evaluations to demonstrate compliance before affixing the CE marking.
Conformity Assessments
Medtech products classified as high risk will require third-party conformity assessments to confirm adherence to the EU AI Act and relevant legislation such as the Medical Devices Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR). The act aims to implement a coordinated approach, allowing for a single assessment under both the EU AI Act and the MDR/IVDR, although complexities may arise due to the dual regulatory environment.
Transparency Requirements
Mandatory transparency measures will be enforced for high-risk AI systems in medtech. This includes providing clear documentation on how AI systems make decisions, ensuring that both medical professionals and patients can understand the system’s outputs.
Risk Management
Medtech companies utilizing high-risk AI systems will be required to establish robust risk management systems to identify and mitigate potential risks associated with AI in healthcare. Ongoing monitoring of the system post-deployment is essential to prevent or minimize harm.
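The post-deployment monitoring obligation could be approached, in the simplest case, by tracking an AI system's live performance against its validated baseline and flagging drift. The sketch below is illustrative only; the class name, thresholds, and window size are assumptions, not terms from the act.

```python
from collections import deque

# Hypothetical post-market monitor: tracks a rolling error rate over a
# fixed window and flags when it drifts beyond a tolerance above the
# error rate established during conformity assessment. All numbers are
# illustrative defaults, not regulatory values.
class PerformanceMonitor:
    def __init__(self, baseline_error=0.05, tolerance=0.02, window=100):
        self.baseline_error = baseline_error
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = error, 0 = correct

    def record(self, prediction_correct: bool) -> None:
        self.outcomes.append(0 if prediction_correct else 1)

    def alert(self) -> bool:
        # Only alert once the window is full, to avoid noisy early signals.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        rolling_error = sum(self.outcomes) / len(self.outcomes)
        return rolling_error > self.baseline_error + self.tolerance
```

In practice an alert like this would feed the provider's incident-reporting and corrective-action processes rather than act on its own.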
Human Oversight
High-risk AI systems must incorporate mechanisms for human oversight, enabling healthcare professionals to audit and adjust clinical decisions that have implications for patient health.
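One common pattern for human oversight is to route low-confidence AI outputs to a clinician review queue, where the clinician can confirm or override the suggestion. The following is a minimal sketch of that pattern; the `ReviewQueue` name and the 0.90 threshold are assumptions for illustration, not requirements stated in the act.

```python
from dataclasses import dataclass, field

# Illustrative human-in-the-loop triage: outputs below a confidence
# threshold are queued for clinician review instead of being auto-applied.
@dataclass
class ReviewQueue:
    threshold: float = 0.90            # assumed cutoff, not a legal value
    pending: list = field(default_factory=list)

    def triage(self, case_id: str, ai_suggestion: str, confidence: float) -> str:
        if confidence < self.threshold:
            self.pending.append((case_id, ai_suggestion, confidence))
            return "needs_human_review"
        return "auto_accepted"

    def override(self, case_id: str, clinician_decision: str) -> str:
        # The clinician's decision replaces the AI suggestion entirely.
        self.pending = [c for c in self.pending if c[0] != case_id]
        return clinician_decision
```

The design choice here is that the human decision always wins: the AI output is a suggestion, and the audit trail records who resolved each case.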
Logging, Accuracy, and Cybersecurity
AI systems categorized as high risk must automatically generate logs of events throughout their operational lifetime. They must also achieve an appropriate level of accuracy and robustness to errors, alongside meeting necessary cybersecurity standards.
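The automatic-logging requirement could be met with an append-only, structured event record written at each inference. Below is a minimal sketch under that assumption; `AuditLogger` and `log_inference` are hypothetical names, and the exact fields a provider must log will depend on the system and applicable guidance.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical audit logger: appends one JSON line per event so records
# are tamper-evident in ordering and easy to retain for the required period.
class AuditLogger:
    def __init__(self, path: str):
        self._path = path

    def record(self, event: dict) -> None:
        event["event_id"] = str(uuid.uuid4())
        event["timestamp"] = datetime.now(timezone.utc).isoformat()
        with open(self._path, "a", encoding="utf-8") as f:
            f.write(json.dumps(event) + "\n")

def log_inference(logger: AuditLogger, model_version: str,
                  input_ref: str, output: str, confidence: float) -> None:
    logger.record({
        "model_version": model_version,
        "input_ref": input_ref,   # a reference ID, never raw patient data
        "output": output,
        "confidence": confidence,
    })
```

Logging references rather than raw inputs keeps the audit trail useful while limiting the personal data it holds.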
Deployment Obligations
The act places responsibilities on the deployers of AI systems, such as hospitals and clinicians, to ensure the AI is used as instructed, maintain oversight by trained personnel, and conduct monitoring and surveillance. This will impact relationships across the entire AI contractual chain, involving providers, distributors, and healthcare organizations.
Conclusion
All medtech companies implementing AI systems must ensure adequate levels of AI literacy within their organizations. The supervisory authorities are expected to provide guidelines that align with the act's obligations, integrating EU fundamental rights. Compliance deadlines will be phased in, with most obligations taking effect on August 2, 2026; high-risk systems already on the market benefit from transitional arrangements unless they undergo significant design changes.