The Regulation of Clinical Trials Involving AI
As artificial intelligence (AI) becomes increasingly integrated into clinical trials, companies developing pharmaceutical or biotech products must navigate the regulatory landscape carefully. The use of AI at various stages of a clinical trial, including trial design, administration, patient recruitment, and data analysis, raises several regulatory considerations.
The implications of AI use in these contexts are of significant interest to regulators, particularly concerning the trial’s integrity and the safety of subjects involved. Although the AI Act may not cover certain research applications of AI, its exclusions offer limited reassurance for clinical trials. A complex framework of regulations and guidance applies to these scenarios.
Data, Data Everywhere – Can Any of It Be Used?
AI serves as a powerful tool for sorting and analyzing clinical data, playing a pivotal role in product development. However, for regulators to trust the outcomes of clinical trials utilizing AI, several critical factors must be addressed:
- Regulatory Impact Assessment: A thorough assessment is necessary to determine whether the AI/machine learning (ML) application is low or high risk concerning the trial and its regulatory implications. For instance, the European Medicines Agency (EMA) identifies high-risk uses, such as those involving treatment assignment or dosing decisions.
- Transparency: Clear communication with regulators about AI usage is crucial. Regulators need to evaluate whether the AI applications are suitable and yield reliable conclusions for authorizations. The EMA emphasizes the need for comprehensive regulatory assessments, which include full disclosure of model architecture and logs from development, validation, and testing.
- Compliance with Data Standards: Adhering to established data standards is vital. This includes compliance with ICH E6 GCP guidelines on data integrity and the latest drafts concerning AI applications in clinical trials. Furthermore, statistical principles outlined in ICH E9 are essential for ensuring the reliability of clinical data.
- Thoughtful Use of AI: The EMA provides guiding principles for using large language models (LLMs) in regulatory science. Key recommendations include avoiding the input of sensitive data into non-secure LLMs and critically evaluating the outputs for reliability before inclusion in regulatory documents.
Regulators are wary of unreliable LLM outputs, fearing that inaccuracies could remain undetected, leading to potential regulatory issues.
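The risk-triage step described in the first bullet above can be pictured as a simple decision rule. The sketch below is purely illustrative: the "high-risk" triggers (treatment assignment, dosing) follow the EMA's examples, but the function and field names are hypothetical assumptions, not part of any official tooling or guidance.

```python
# Hypothetical sketch of a regulatory impact triage for an AI/ML use case in a
# clinical trial. The high-risk triggers mirror the EMA's examples (treatment
# assignment, dosing decisions); everything else here is an illustrative
# assumption, not official tooling.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    description: str
    affects_treatment_assignment: bool = False  # e.g. AI-driven randomization
    affects_dosing: bool = False                # e.g. AI-selected dose levels


def triage_risk(use_case: AIUseCase) -> str:
    """Classify an AI/ML application as 'high' or 'low' regulatory risk."""
    if use_case.affects_treatment_assignment or use_case.affects_dosing:
        return "high"  # expect deep regulatory scrutiny: architecture, logs, validation
    return "low"       # e.g. administrative scheduling or literature screening


print(triage_risk(AIUseCase("AI-assisted dose titration", affects_dosing=True)))  # high
print(triage_risk(AIUseCase("Visit-scheduling assistant")))                       # low
```

In practice the assessment is a documented judgment, not a boolean check; the point of the sketch is only that the trial-critical uses the EMA flags (assignment and dosing) sit firmly on the high-risk side.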
Use of Medical Devices Including an AI System
When pharmaceutical and biotech trials incorporate medical devices or in vitro diagnostics, compliance with additional legislation is required alongside the Clinical Trials Regulation (EU) 536/2014. The use of such devices in clinical trials in the EU/EEA is categorized as either ‘placing on the market’ or ‘putting into service’. If a device is not yet CE marked, the sponsor must notify the competent authority of the clinical investigation or performance study, and an authorization is often required.
In cases where the device incorporates an AI system classified as high-risk under the AI Act, it is unlikely to qualify for exemptions related to scientific research and development. Consequently, the device must meet rigorous testing standards for high-risk AI systems as outlined in Articles 60 and 61 of the AI Act.
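The compliance checkpoints described in the two paragraphs above can be summarized as a short checklist. The sketch below is a reading aid under stated assumptions, not legal advice or official tooling; the function name and step wording are hypothetical.

```python
# Illustrative checklist builder for a trial using a device that embeds an AI
# system, following the checkpoints described in the text. Names and step
# wording are hypothetical assumptions, not an official compliance procedure.
def required_steps(ce_marked: bool, ai_high_risk: bool) -> list[str]:
    """Return the regulatory steps sketched in the text for a given device."""
    steps = ["Comply with Clinical Trials Regulation (EU) 536/2014"]
    if not ce_marked:
        # Non-CE-marked devices need competent-authority notification,
        # and often a prior authorization.
        steps.append("Notify competent authority of the clinical investigation "
                     "or performance study (authorization often required)")
    if ai_high_risk:
        # High-risk AI systems are unlikely to benefit from the AI Act's
        # research exemptions.
        steps.append("Meet the AI Act's testing requirements for high-risk "
                     "AI systems (Articles 60 and 61)")
    return steps


for step in required_steps(ce_marked=False, ai_high_risk=True):
    print("-", step)
```

The point of the sketch is the conjunction: the AI Act obligations stack on top of, rather than replace, the clinical trial and device rules.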
The interplay between the AI Act and EU medical device regulations creates complexities that may only be resolved through comprehensive guidance from the EU, necessitating a deep understanding of clinical trial operations and medical device regulations.
In conclusion, as AI continues to transform the landscape of clinical trials, stakeholders must remain vigilant in understanding and adhering to the evolving regulatory frameworks to protect trial integrity, safeguard subjects, and preserve the regulatory viability of their products.