EU and US Forge New AI Principles for Drug Development

The European Medicines Agency (EMA) and the US Food and Drug Administration (FDA) have established new AI principles in drug development, aiming to reduce regulatory divergence between the major markets of the European Union and the United States.

Industry associations have welcomed the landmark accord for strengthening harmonization across regions, though they stress that more concrete steps are needed going forward.

The Shift from Monitoring to Principles-Based Regulation

As AI technologies become increasingly integrated into evidence generation and analysis in drug development, regulators are transitioning from mere monitoring to establishing principles-based guardrails. This shift aims to enhance the accountability, integrity, and performance of these emerging technologies.

The accord is expected to significantly influence global AI use in drug development, as the regulatory weight of EMA and FDA decisions sets standards worldwide.

Guiding Principles and Renewed Cooperation

The AI principles, published on January 14, mark the culmination of a two-year process aimed at addressing the regulatory divergence that has posed a significant barrier to digital innovation in the pharmaceutical sector. According to Olivér Várhelyi, European Commissioner for Health and Animal Welfare, these guiding principles are a first step towards renewed EU-US cooperation in the field of novel medical technologies.

The goal is to preserve a leading role in the global innovation race while ensuring the highest level of patient safety.

Industry Response

Industry associations, such as the European Federation of Pharmaceutical Industries and Associations (EFPIA), view the joint effort of EMA and FDA in developing these principles as an important move towards global regulatory convergence. EFPIA represents the interests of 36 national associations and 40 pharmaceutical companies operating in Europe.

Comprehensive Governance of AI in Drug Development

The AI principles are designed to govern the use of AI technology throughout its lifecycle in drug development—from early-stage drug discovery to clinical trials and post-market safety monitoring. This ensures that patient safety and ethical integrity are prioritized, emphasizing a human-centric approach and oversight.

With these new principles, EMA and FDA aim to dismantle the “AI black box,” which often leaves users and patients unaware of the processes behind result generation. Regulatory bodies state that AI must be applied in well-defined contexts and be understandable to all parties involved.

Addressing Challenges and Enhancing Accountability

A crucial aspect of the joint document is its focus on tackling the issue known as “shadow use,” where analysts rely on large language models (LLMs) in their daily work without leadership’s knowledge or formal approval. The principles call for enhanced oversight to counteract this practice.

Furthermore, the principles mandate that data scientists collaborate closely with clinical leads throughout the drug development process, ensuring that clinical teams understand the technical tools being utilized.

Instead of relying on a single, one-time validation, the principles now require continuous monitoring for “data drift”—the degradation of AI performance over time as the underlying data environment changes.
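The principles do not prescribe any particular drift metric, but as an illustration of what continuous monitoring can look like in practice, the sketch below computes the Population Stability Index (PSI), a common industry heuristic that compares the distribution of incoming data against the data a model was originally validated on. The function name, thresholds, and synthetic data here are illustrative assumptions, not part of the EMA/FDA document.

```python
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Population Stability Index: a common heuristic for data drift.
    Values above roughly 0.2 are often treated as significant drift."""
    # Bin edges come from the baseline (validation-time) data
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    o_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Floor the proportions to avoid division by zero / log(0)
    e_pct = np.clip(e_pct, 1e-6, None)
    o_pct = np.clip(o_pct, 1e-6, None)
    return float(np.sum((o_pct - e_pct) * np.log(o_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # data the model was validated on
current = rng.normal(0.5, 1.2, 5000)   # shifted production data
print(round(population_stability_index(baseline, current), 3))
```

In a monitoring pipeline, a check like this would run on a schedule over each model input feature, with drift above a chosen threshold triggering review or revalidation rather than a one-off sign-off.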

Next Steps and Future Implications

While the new AI principles align with existing regulations, they do not drastically alter how AI is currently employed in the European industry. However, they lay the groundwork for a unified language in medical technology development, which could have extensive implications for global AI regulation in drug development.

According to EFPIA, these principles facilitate a more coherent environment for scaling AI tools globally and for engaging with regulators consistently. While the guidelines represent a solid foundation for minimizing duplicative or divergent requirements across regions, they remain high-level. Further steps toward establishing shared terminology, definitions, and concepts are anticipated in the near future.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...