How The EU AI Act Impacts Medical Device Manufacturers
The EU AI Act (Regulation (EU) 2024/1689) is a landmark regulation that will shape the future of AI in Europe and is expected to set the baseline for similar legislation in other regions. It establishes new standards and rules for the development and use of AI systems, creating both opportunities and challenges for innovation and competitiveness across the AI industry.
The Act aims to regulate AI in a manner that balances the benefits and risks of this transformative technology. It will particularly affect the medical device industry, since a growing number of devices incorporate AI technology; manufacturers of such devices must comply with the Act's provisions.
Risk Classifications of AI Systems
All AI systems are classified into four risk categories: unacceptable, high, limited, and minimal.
- Unacceptable: AI systems deemed unacceptable are banned from the market. Examples include untargeted scraping of facial images, emotion recognition in workplaces, social scoring, and biometric categorization to infer sensitive data.
- High-risk: These systems significantly impact people’s lives and rights, such as those used in healthcare, education, and public services. They must comply with strict requirements, including data quality, transparency, and human oversight.
- Limited-risk: AI systems that pose some risk, such as chatbots, are subject to transparency obligations: users must be clearly informed that they are interacting with an AI system.
- Minimal-risk: Systems like spam filters are largely exempt from regulation but must still adhere to general principles of safety and fairness.
Medical devices utilizing AI will typically fall under the high-risk category, requiring oversight by a notified body due to their significant impact on health and safety.
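As a rough sketch, the classification rule for regulated products under Article 6(1) of the Act can be expressed as a predicate: an AI system is high-risk when it is a safety component of (or is itself) a product covered by EU harmonisation legislation, such as the MDR, and that product requires third-party conformity assessment. The function name and boolean inputs below are illustrative; the real determination requires legal analysis of the specific device.

```python
def is_high_risk_medical_ai(is_safety_component: bool,
                            requires_notified_body: bool) -> bool:
    """Illustrative sketch of the Article 6(1) high-risk test for
    AI in products covered by EU harmonisation legislation (e.g. the MDR).

    is_safety_component    -- the AI is a safety component of (or is) the product
    requires_notified_body -- the product undergoes third-party conformity assessment
    """
    return is_safety_component and requires_notified_body


# A Class IIa+ medical device with an AI safety component typically needs a
# notified body under the MDR, so it lands in the high-risk category:
print(is_high_risk_medical_ai(True, True))   # → True
# A self-certified Class I device without third-party assessment would not
# be caught by this particular rule:
print(is_high_risk_medical_ai(True, False))  # → False
```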
Compliance Timeline and Requirements
The EU AI Act was published in the Official Journal on July 12, 2024, and entered into force on August 1, 2024. Most provisions apply from August 2, 2026, while certain provisions, including the governance rules and obligations for general-purpose AI models, apply from August 2, 2025 (prohibitions on unacceptable-risk practices applied from February 2, 2025).
Providers of high-risk AI systems must implement a Quality Management System (QMS) covering the following aspects:
- Risk management to identify and mitigate potential risks during the AI system’s lifecycle.
- Data governance for training, validation, and testing data sets.
- Technical documentation development and maintenance, integrating AI-related documentation with existing medical device documentation.
- Data logging to track AI system data throughout its lifecycle.
- Labeling that provides information on the AI system’s functioning and maintenance.
- A design that ensures appropriate levels of accuracy, safety, and cybersecurity.
- Post-market monitoring, including incident reporting to relevant authorities.
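The seven QMS aspects above lend themselves to a simple gap analysis. The sketch below is a hypothetical checklist structure (the aspect names paraphrase the list above; the function and dictionary are illustrative, not part of the Act):

```python
# Hypothetical gap-analysis checklist covering the QMS aspects listed above.
QMS_ASPECTS = [
    "risk management",
    "data governance",
    "technical documentation",
    "data logging",
    "labeling / instructions for use",
    "accuracy, safety and cybersecurity by design",
    "post-market monitoring and incident reporting",
]

def open_gaps(evidence: dict) -> list:
    """Return the QMS aspects for which no documented evidence exists yet."""
    return [aspect for aspect in QMS_ASPECTS if not evidence.get(aspect, False)]


# Example: a manufacturer that has only documented its risk management process
# still has six open gaps to close before the compliance deadline.
print(open_gaps({"risk management": True}))
```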
Compliance with the QMS will be required by August 2, 2025, along with the identification of economic operators.
Economic Operators and Their Obligations
The EU AI Act identifies several economic operators involved in the lifecycle of high-risk AI systems, including:
- Providers established outside the EU must appoint an Authorized Representative (AR) in the EU to verify documentation and act as the liaison with authorities.
- Importers must ensure conformity with AI regulations and provide their details with the AI system.
- Distributors must verify that AI systems include appropriate instructions and CE marking.
- Users (termed "deployers" in the final text of the Act) must comply with the instructions for use and report any serious incidents or malfunctions.
Most medical device manufacturers will already have their economic operators identified due to existing regulations; however, contracts should be updated to reflect obligations related to the AI Act.
Database and Regulatory Framework
The EU AI Act will establish a European database to register all providers, ARs, and AI systems, alongside a governance structure for oversight and enforcement. A European Artificial Intelligence Board (EAIB) will be formed to provide guidance on AI matters.
To foster innovation, member states will set up AI regulatory sandboxes for developing and testing AI systems in controlled environments. For example, Spain has initiated a pilot sandbox under its State Agency for the Supervision of Artificial Intelligence.
Consequences of Non-Compliance
The EU AI Act introduces a tiered system of sanctions for non-compliance: national authorities can impose administrative fines of up to €35 million or 7% of annual worldwide turnover (whichever is higher) for prohibited practices, and up to €15 million or 3% for most other violations, including non-compliance with the high-risk requirements. Authorities can also require non-compliant AI systems to be withdrawn, recalled, or brought into conformity.
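The Act expresses each fine ceiling as the higher of a fixed euro amount and a share of worldwide turnover (Article 99). A minimal sketch of that formula, with illustrative figures:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum administrative fine: the higher of a fixed euro cap and a
    percentage of annual worldwide turnover (the Article 99 structure)."""
    return max(fixed_cap_eur, pct * turnover_eur)


# Example: for the prohibited-practices tier (€35 million or 7%), a provider
# with €2 billion worldwide turnover faces a ceiling of €140 million,
# because 7% of turnover exceeds the fixed cap:
print(fine_ceiling(2_000_000_000, 35_000_000, 0.07))  # → 140000000.0
```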
Pitfalls to Avoid
Although the AI Act’s requirements appear similar to those for medical devices, manufacturers must be cautious. Notified bodies for medical devices may not be designated for the AI Act, leading to dual inspections and documentation needs.
As the deadlines approach, manufacturers using AI in their devices should assess compliance with the AI Act's requirements and integrate them into their QMS. Compliance with ISO 13485 alone is insufficient; additional measures, such as cybersecurity controls, are also necessary.
Conclusion
While the EU AI Act shares similarities with existing medical device regulations, compliance is not guaranteed. Manufacturers should proactively begin their compliance journey to meet the upcoming deadlines of August 2025 and 2026.