Regulating AI in Clinical Trials: Key Considerations


As the integration of artificial intelligence (AI) into clinical trials becomes increasingly prevalent, companies developing pharmaceutical or biotech products must navigate the regulatory landscape carefully. The use of AI across the stages of a clinical trial, including trial design, administration, patient recruitment, and data analysis, raises several regulatory considerations.

The implications of AI use in these contexts are of significant interest to regulators, particularly with respect to the integrity of the trial and the safety of its subjects. Although the EU AI Act (Regulation (EU) 2024/1689) excludes certain scientific research applications of AI from its scope, those exclusions offer limited reassurance in the clinical trial context, where a complex framework of regulations and guidance continues to apply.

Data, Data Everywhere – Can Any of It Be Used?

AI serves as a powerful tool for sorting and analyzing clinical data, playing a pivotal role in product development. However, for regulators to trust the outcomes of clinical trials utilizing AI, several critical factors must be addressed:

  • Regulatory Impact Assessment: A thorough assessment is necessary to determine whether the AI/machine learning (ML) application is low or high risk in the context of the trial and its regulatory impact. For instance, the European Medicines Agency (EMA) identifies high-risk uses, such as those involving treatment assignment or dosing decisions.
  • Transparency: Clear communication with regulators about AI usage is crucial. Regulators need to evaluate whether the AI applications are suitable and yield reliable conclusions for authorizations. The EMA emphasizes the need for comprehensive regulatory assessments, which include full disclosure of model architecture and logs from development, validation, and testing.
  • Compliance with Data Standards: Adhering to established data standards is vital. This includes compliance with ICH E6 GCP guidelines on data integrity and the latest drafts concerning AI applications in clinical trials. Furthermore, statistical principles outlined in ICH E9 are essential for ensuring the reliability of clinical data.
  • Thoughtful Use of AI: The EMA provides guiding principles for using large language models (LLMs) in regulatory science. Key recommendations include avoiding the input of sensitive data into non-secure LLMs and critically evaluating the outputs for reliability before inclusion in regulatory documents.

Regulators are wary of unreliable LLM outputs, fearing that inaccuracies could remain undetected, leading to potential regulatory issues.

Use of Medical Devices Including an AI System

When pharmaceutical and biotech trials incorporate medical devices or in vitro diagnostics, additional legislation applies alongside the Clinical Trials Regulation (EU) No 536/2014. The use of such devices in clinical trials in the EU/EEA is categorized as either ‘placing on the market’ or ‘putting into service’. If a device is not yet CE marked, notification to the competent authority of a clinical investigation or performance study is mandatory, and in many cases an authorization is also required.

In cases where the device incorporates an AI system classified as high-risk under the AI Act, it is unlikely to qualify for exemptions related to scientific research and development. Consequently, the device must meet rigorous testing standards for high-risk AI systems as outlined in Articles 60 and 61 of the AI Act.

The interplay between the AI Act and EU medical device regulations creates complexities that may only be resolved through comprehensive guidance from the EU, necessitating a deep understanding of clinical trial operations and medical device regulations.

In conclusion, as AI continues to transform the landscape of clinical trials, stakeholders must remain vigilant in understanding and adhering to the evolving regulatory frameworks to ensure the safety and efficacy of their products.
