Model Behavior: FDA and EMA’s Guide to Good AI in Drug Development
The US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) have released a joint statement outlining 10 “Guiding Principles” for the use of artificial intelligence (AI) in drug development. The principles matter to anyone who designs, validates, deploys, or relies on AI technology in regulated environments.
What Was Released
The statement specifies how AI should be designed, used, and managed across drug development. The agencies emphasize that the principles will help realize AI’s potential while ensuring the reliability of the information it produces, thereby protecting patient safety and maintaining regulatory standards. The guiding principles address the distinct challenges posed by AI applications in drug development.
AI is defined broadly, covering technologies that support non-clinical research, clinical studies, manufacturing, and post-marketing safety activities. Drugs must still meet the essential requirements for quality, efficacy, and safety; AI should enhance, not diminish, patient protections.
Why This Matters for FDA and EMA Interactions
The FDA and EMA prioritize the reliability and completeness of the evidence behind regulatory decisions. The new principles do not change existing law, but they align with current regulatory practice. Sponsors and their partners should be prepared for questions about data provenance and processing, model testing, and human oversight of AI-assisted decisions. Aligning with the principles early can smooth submissions and improve inspection readiness.
The 10 Principles in Plain Language
Human-Centric Design
The first theme emphasizes that AI should reflect ethical and human values. Stakeholders must consider how AI impacts patients and users, integrating protections from the outset to promote public health and prevent foreseeable harm.
Risk-Based Control
Every AI application needs a clearly defined context of use that states what the tool does and how its output will be used. Risk assessment should determine the depth of testing and oversight: lower-risk tools warrant lighter controls, while high-risk tools demand more rigorous safeguards.
Alignment with Standards and Expertise
AI applications should comply with relevant legal, ethical, and regulatory standards. A multidisciplinary approach is recommended, incorporating experts from various fields such as data science, cybersecurity, and patient safety.
Sound Data and Model Practice
Sponsors should track and document data sources, processing steps, and analytical decisions. Models should be built with sound engineering practices, be fit for their intended use, and balance interpretability against predictive performance.
Rigorous Evaluation and Lifecycle Control
Performance assessments should consider the entire system, including human interactions with AI. Validation must align with the stated context of use, and quality management systems should oversee the AI lifecycle. Clear communication with users and stakeholders about the AI’s purpose, performance, and limitations is essential.
Fit With Existing FDA and EMA Requirements
The principles are consistent with the frameworks the FDA and EMA already use to evaluate data integrity and patient safety, and their focus on documentation and validation matches long-standing regulatory expectations. By embedding the principles within familiar regulatory processes, the agencies can apply them without issuing new rules.
In conclusion, the FDA and EMA’s guiding principles serve as a framework for the responsible integration of AI in drug development, ensuring that patient safety remains paramount while harnessing the transformative potential of technology.