Regulating AI in Clinical Trials: Key Considerations

The Regulation of Clinical Trials Involving AI

As the integration of artificial intelligence (AI) into clinical trials becomes increasingly prevalent, it is essential for companies developing pharmaceutical or biotech products to navigate the regulatory landscape carefully. The use of AI in various stages of clinical trials, including trial design, administration, patient recruitment, and data analysis, raises several regulatory considerations.

The implications of AI use in these contexts are of significant interest to regulators, particularly concerning the trial’s integrity and the safety of subjects involved. Although the AI Act may not cover certain research applications of AI, its exclusions offer limited reassurance for clinical trials. A complex framework of regulations and guidance applies to these scenarios.

Data, Data Everywhere – Can Any of It Be Used?

AI serves as a powerful tool for sorting and analyzing clinical data, playing a pivotal role in product development. However, for regulators to trust the outcomes of clinical trials utilizing AI, several critical factors must be addressed:

  • Regulatory Impact Assessment: A thorough assessment is necessary to determine whether the AI/machine learning (ML) application poses a low or high risk to the trial and its regulatory outcome. For instance, the European Medicines Agency (EMA) identifies high-risk uses, such as those involving treatment assignment or dosing decisions.
  • Transparency: Clear communication with regulators about AI usage is crucial. Regulators need to evaluate whether the AI applications are suitable and yield reliable conclusions for authorizations. The EMA emphasizes the need for comprehensive regulatory assessments, which include full disclosure of model architecture and logs from development, validation, and testing.
  • Compliance with Data Standards: Adhering to established data standards is vital. This includes compliance with ICH E6 GCP guidelines on data integrity and the latest drafts concerning AI applications in clinical trials. Furthermore, statistical principles outlined in ICH E9 are essential for ensuring the reliability of clinical data.
  • Thoughtful Use of AI: The EMA provides guiding principles for using large language models (LLMs) in regulatory science. Key recommendations include avoiding the input of sensitive data into non-secure LLMs and critically evaluating the outputs for reliability before inclusion in regulatory documents.

Regulators are wary of unreliable LLM outputs, fearing that inaccuracies could remain undetected, leading to potential regulatory issues.
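The EMA's caution about feeding sensitive data into non-secure LLMs can be made concrete with a minimal sketch. The identifier patterns and placeholder labels below are illustrative assumptions, not a prescribed method; a real deployment would rely on a validated de-identification tool rather than ad-hoc regular expressions, but the principle of redacting before any text leaves a controlled environment is the same:

```python
import re

# Hypothetical identifier patterns for illustration only; real trials
# would use a validated de-identification pipeline, not these regexes.
PATTERNS = {
    "subject_id": re.compile(r"\bSUBJ-\d{4,}\b"),
    "date_of_birth": re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder
    before the text is sent to any external (non-secure) LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

prompt = "Summarise AE narrative for SUBJ-00123 (DOB 04/11/1987, j.doe@example.com)."
print(redact(prompt))
# → Summarise AE narrative for [SUBJECT_ID] (DOB [DATE_OF_BIRTH], [EMAIL]).
```

The same gatekeeping step also gives auditors a concrete artifact: the redaction log demonstrates what was, and was not, disclosed to the model.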

Use of Medical Devices Including an AI System

When pharmaceutical and biotech trials incorporate medical devices or in vitro diagnostics, compliance with additional legislation is required alongside the Clinical Trials Regulation (EU) 536/2014. The use of such devices in clinical trials in the EU/EEA is categorized as either ‘placing on the market’ or ‘putting into service’. If a device is not yet CE marked, notification to the competent authority for a clinical or performance study is mandatory, and authorization is often required as well.

In cases where the device incorporates an AI system classified as high-risk under the AI Act, it is unlikely to qualify for exemptions related to scientific research and development. Consequently, the device must meet rigorous testing standards for high-risk AI systems as outlined in Articles 60 and 61 of the AI Act.

The interplay between the AI Act and EU medical device regulations creates complexities that may only be resolved through comprehensive guidance from the EU, necessitating a deep understanding of clinical trial operations and medical device regulations.

In conclusion, as AI continues to transform the landscape of clinical trials, stakeholders must remain vigilant in understanding and adhering to the evolving regulatory frameworks to ensure the safety and efficacy of their products.
