AI Principles for Safe Drug Development

FDA and EMA Provide Guiding Principles for AI in Drug Development

On January 14, 2026, the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) jointly released the “Guiding Principles of Good AI Practice in Drug Development,” a set of 10 high-level principles intended to steer the safe and responsible use of AI across the product lifecycle.

While not formal industry guidance, the document provides important insight into FDA and EMA thinking on the deployment of AI during drug and biologic product development and signals the direction of future guidance from both agencies.

Scope of the Principles

The principles apply to AI systems used to generate or analyze evidence across the nonclinical, clinical, post-marketing, and manufacturing phases of drug and biologic development. The agencies frame the principles as a foundation for future guidance, standards, and harmonized regulatory expectations from international regulators, standards organizations, and other collaborative bodies.

Benefits and Ethical Considerations

The regulators emphasize that AI can accelerate innovation, reduce time-to-market, strengthen pharmacovigilance, and decrease reliance on animal testing while maintaining existing standards for quality, safety, and efficacy. However, to realize these benefits, the use of AI during drug and biologic product development should follow the 10 principles.

Key Principles

Key themes among the 10 principles include:

  • Human-centric ethical design
  • Risk-based development, deployment, and performance assessments
  • Data governance, document management, and cybersecurity
  • Data quality and life cycle management

Key Takeaways for Regulated Industry

The principles effectively outline a governance checklist that regulators expect developers to follow, but they stop short of providing concrete, actionable instructions on how to demonstrate adherence. This leaves developers to interpret how to apply these broad concepts in practice while awaiting more granular recommendations from the agencies.

Recommendations for Companies

In this context, companies should develop, or reassess, their AI governance frameworks with a focus on tangible steps, including:

  • Establishing a formal, cross-functional governance body
  • Implementing a risk-based approach to categorize AI tools and determine appropriate levels of validation
  • Ensuring robust documentation across the AI lifecycle (e.g., data provenance, model selection, and validation reports)
  • Engaging regulators early through pre-submission meetings to align on expectations for novel AI systems

Additionally, sponsors that proactively engage with the FDA in pre-submission meetings to align on expectations, or that reassess existing AI governance frameworks against these principles for systems already in use, will be better positioned in future regulatory interactions.
