Impact of EU AI Act on Drug Development Innovation

EU AI Act Could ‘Set Back’ Benefits of AI in Drug Development If Applied to R&D

The EU’s Artificial Intelligence (AI) Act was formally adopted last year, with many of its provisions expected to apply this year or next. For pharmaceutical and biotech companies, however, ambiguities in the legislative text have raised concerns about how extensively the Act will apply to drug R&D activities.

Concerns from Industry Experts

This potential challenge was highlighted by Stephen Reese, co-chair of the healthcare & life sciences sector group and co-head of the intellectual property group at the international law firm Clifford Chance.

The key question for drug developers in relation to the AI Act is how much it will impact drug development activities. “Some in the industry are expressing concerns that AI legislation is more far-reaching than potentially necessary, as it could extend deep into aspects of the early stage of the drug lifecycle,” Reese stated.

He emphasized the importance of assessing where AI systems are being used for R&D, which could include theoretical drug discovery analyses, where companies may use AI to process data and gain insights into disease pathways.

Regulatory Ambiguities and Exemptions

The AI Act covers all sectors, and although AI tools used only for R&D are theoretically exempt from the legislation’s strict rules, it remains unclear how this exemption would work in practice.

While the AI Act places responsibility for compliance on the company that puts an AI product on the market under its own name or trademark, researchers using AI tools may still need to review their license terms, contracts, warranties, and liabilities.

Reese noted that the AI Act, if applied to drug discovery activities, could make the UK a more attractive destination for research than the EU.

AI in Clinical Trials and Bias Risks

AI could also be effective in clinical trials, for example in identifying the right patient population groups to target for recruitment. In this case, companies would use AI to process large amounts of data to identify suitable compounds and the right groups of people for clinical trials.

However, Reese pointed out that there is a risk of unexpected bias when using AI to select participants for clinical trials. This could skew results compared to a more randomized recruitment process.

Regulatory Sandboxes: A Double-Edged Sword

One aspect of the AI Act that has been largely welcomed across industries, including pharma, is the introduction of “regulatory sandboxes,” which provide controlled environments for developers to test their products under regulatory supervision.

However, there are “question marks” over what can be included in a sandbox regime. Sandboxing could be employed to explore initial drug discovery work or the use of digital twins for testing a product’s efficacy and safety profile.

Digital twins are virtual replicas of biological systems designed to simulate the effects of drugs on real-life patients. While AI-enabled digital twins could introduce regulatory complexities if widely adopted, Reese mentioned that the industry is still in the early stages of using digital twins.

Conclusion

Drug regulators will expect stronger evidence before incorporating digital twin technology into the assessment of medicines. Established pre-clinical and clinical study methods are unlikely to be replaced by digital twins in the near future.

This article is part of a series that discusses the implications of the EU AI Act on drug development, including unintended data ownership and intellectual property challenges.
