Pharma Races Ahead on AI, but Governance Struggles to Keep Pace
The use of AI in pharmaceutical development is becoming widespread, but the rulebook is still being written.
According to a McKinsey report, 71% of businesses across industries now use generative AI, with life sciences among the fastest adopters, yet only 53% actively mitigate AI risks. That gap breeds unease as pharmaceutical companies deploy autonomous systems across R&D and manufacturing while still building the governance frameworks meant to contain them.
The conversation has shifted from whether AI belongs in drug discovery or on production lines to how companies can secure innovation without constraining the very capabilities that make AI so valuable.
Operational Vulnerabilities
AI systems introduce risks that legacy pharmaceutical quality frameworks were not designed to address. One significant risk is model drift: a model's predictions degrade over time as live production data diverges from the distribution the model was trained on. In drug discovery, this can mean compounds are flagged as promising on the strength of outdated datasets; in manufacturing, it can leave quality control algorithms blind to process abnormalities.
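To make drift detection concrete: in its simplest form it reduces to comparing the distribution of incoming production data against the training baseline and alerting when the two diverge. The sketch below is a minimal illustration, assuming a single numeric feature and a two-sample Kolmogorov–Smirnov test; the feature values and significance threshold are invented for the example:

```python
import numpy as np
from scipy.stats import ks_2samp

def check_feature_drift(train_values, live_values, alpha=0.01):
    """Flag drift when a feature's live distribution diverges from
    its training baseline, using a two-sample KS test."""
    result = ks_2samp(train_values, live_values)
    return {
        "ks_statistic": result.statistic,
        "p_value": result.pvalue,
        "drift_detected": result.pvalue < alpha,
    }

# Illustrative data: a model input whose production distribution
# has shifted away from the training set.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training baseline
live = rng.normal(loc=0.4, scale=1.2, size=1_000)   # shifted production data
print(check_feature_drift(train, live))
```

In production this check would run per feature on a schedule, with flagged drift triggering model revalidation rather than silent retraining.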
Another layer of exposure arises from data leakage between contract research organizations and manufacturing partners. Proprietary models trained on clinical trial data may inadvertently expose patient information or intellectual property when shared across organizational boundaries. Without strict provenance controls, pharmaceutical companies lose visibility into how their data is used and whether models remain compliant with privacy regulations.
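One way to retain that visibility is to attach a provenance record to every dataset before it crosses an organizational boundary, so any downstream use can be traced back to an approved transfer. A minimal sketch of a hash-and-log approach; the field names are illustrative rather than any industry standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(dataset_bytes, owner, recipient, purpose):
    """Fingerprint a dataset and record who receives it and why, so any
    downstream use can be traced back to an approved transfer."""
    return {
        "sha256": hashlib.sha256(dataset_bytes).hexdigest(),  # content fingerprint
        "owner": owner,            # originating organization
        "recipient": recipient,    # e.g. a CRO or manufacturing partner
        "purpose": purpose,        # the approved use of the data
        "transferred_at": datetime.now(timezone.utc).isoformat(),
    }

# Illustrative transfer: trial lab data shared with a CRO for model training.
data = b"subject_id,alt,ast\n042,31,28\n"  # stand-in for a real export
record = provenance_record(data, "Sponsor A", "CRO B", "model training")
print(json.dumps(record, indent=2))
```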
Bias introduces yet another distortion. Regulatory reviews conducted by the FDA and EMA between 2024 and 2025 identified cases where AI-driven algorithms systematically excluded certain demographic groups from clinical trials, compromising both regulatory compliance and therapeutic efficacy. The FDA’s 2023 guidance on Cybersecurity in Medical Devices requires manufacturers to demonstrate that their systems, including those employing AI or machine learning, are designed with resilience, integrity, and recovery capabilities throughout the product lifecycle. These expectations directly shape clinical trial design, regulatory submissions, and, most importantly, patient safety.
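On the demographic exclusion problem in particular, a first line of defence is a routine representation check on the populations a screening algorithm selects. The sketch below is illustrative only, assuming enrollment decisions and a demographic label are available as DataFrame columns; the column names and the 80% threshold are assumptions, not regulatory values:

```python
import pandas as pd

def selection_rates(df, group_col, selected_col, min_ratio=0.8):
    """Compare per-group selection rates against the best-served group
    and flag groups falling below a disparate-impact style threshold."""
    rates = df.groupby(group_col)[selected_col].mean()
    ratios = rates / rates.max()
    return pd.DataFrame({
        "selection_rate": rates,
        "ratio_vs_max": ratios,
        "flagged": ratios < min_ratio,
    })

# Illustrative screening output from a hypothetical trial-matching model.
df = pd.DataFrame({
    "age_band": ["18-40"] * 50 + ["65+"] * 50,
    "selected": [1] * 30 + [0] * 20 + [1] * 10 + [0] * 40,
})
print(selection_rates(df, "age_band", "selected"))
```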
Compliance in Action
Regulatory frameworks directly shape how pharmaceutical companies approach AI governance. The ICH E6(R3) update, finalized in early 2025, strengthens requirements for all computerized systems used in clinical trials, including AI. Section 4.2.2 mandates that metadata relevant to the trial, including audit trails, be retained so that the course of events can be reconstructed. Section 3.16.1 requires that all alterations to data be attributable, recorded, and reconstructable. These requirements touch every AI-assisted trial submitted for regulatory approval.
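In engineering terms, the attributable, recorded, and reconstructable requirement maps naturally onto an append-only audit trail in which every alteration is a new event rather than an overwrite. A minimal sketch of that pattern follows; the event fields are illustrative, not drawn from the guideline itself:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    """One immutable record per data alteration: who, what, why, when."""
    record_id: str
    actor: str
    old_value: str
    new_value: str
    reason: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditTrail:
    def __init__(self):
        self._events: list[AuditEvent] = []  # append-only; no deletes or edits

    def log(self, event: AuditEvent) -> None:
        self._events.append(event)

    def history(self, record_id: str) -> list[AuditEvent]:
        """Reconstruct the course of events for a single record."""
        return [e for e in self._events if e.record_id == record_id]

# Illustrative correction to a vital-signs record.
trail = AuditTrail()
trail.log(AuditEvent("subj-042/hr", "j.smith", "72", "27",
                     "transcription error corrected"))
for event in trail.history("subj-042/hr"):
    print(asdict(event))
```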
The EMA’s Reflection Paper on the Use of Artificial Intelligence in the Medicinal Product Lifecycle reinforces these expectations for machine learning systems. It emphasizes that applicants and marketing authorization holders must ensure that algorithms, models, datasets, and data processing pipelines are transparently documented and auditable. Particularly in clinical trials, model architecture, logs, training data, and processing pipelines should be available for regulatory review, supported by appropriate explainability and governance measures.
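What "transparently documented and auditable" can look like in practice is a structured model record maintained alongside each release, tying the model to its exact data and code. The schema below is a hypothetical illustration, not an EMA template; every name and field in it is an assumption:

```python
import json

# Hypothetical release record for an AI system in a trial workflow.
model_record = {
    "model": {
        "name": "enrollment-ranker",           # illustrative model name
        "version": "2.3.1",
        "architecture": "gradient-boosted trees",
    },
    "training_data": {
        "dataset_id": "trial-021-screening-v4",
        "sha256": "<dataset fingerprint>",     # ties the model to exact data
        "date_range": "2023-01/2024-06",
    },
    "pipeline": {
        "preprocessing": ["unit harmonization", "missing-value imputation"],
        "code_commit": "<git commit hash>",    # reproducibility anchor
    },
    "explainability": "per-feature SHAP summaries archived per release",
    "review_log": "all retraining events recorded in the audit trail",
}
print(json.dumps(model_record, indent=2))
```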
This regulatory shift has accelerated structural change within organizations. A reported 13% of businesses overall have created dedicated AI governance or risk roles, recognizing that regulatory expertise must accompany technical capability. AI centres of excellence are emerging as operational hubs where data scientists, regulatory affairs professionals, and quality assurance teams collaborate to validate models.
These centres enhance accountability by establishing clear ownership, improve reproducibility by standardizing documentation, and increase speed by preventing compliance failures that delay submissions. Our own AI Adoption Lab provides a secure environment for prototyping and validating models before deployment. The real advantage will go to companies that treat compliance as an operational discipline rather than an administrative burden, gaining approval faster and scaling with confidence.
Future Readiness
The next wave of operational risk is clear. A September 2025 analysis emphasizes that as agentic AI becomes embedded in workflows, companies must establish strong governance frameworks with clear accountability, guardrails to prevent unintended consequences, and regular audits to ensure compliance. Agentic AI systems can execute multi-step workflows across R&D and manufacturing without human oversight. This autonomy accelerates discovery and optimizes production but also creates vulnerabilities.
Observable systems are therefore essential: each agent decision must be logged, every data source validated, and every action auditable. Continuous monitoring must become the operational standard, not an afterthought. Pharmaceutical companies that build these capabilities now will be prepared for the next decade; those that defer will face costly remediation when regulatory scrutiny intensifies.
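One common way to achieve that observability is to wrap every agent action in a logging layer so that nothing executes unrecorded. A minimal sketch of the wrapper pattern, assuming the agent's actions are plain Python callables; the action and its parameters are invented for illustration:

```python
import functools
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("agent.audit")

def audited(action):
    """Record every agent action with its inputs, output, and a
    timestamp before the result is allowed to propagate."""
    @functools.wraps(action)
    def wrapper(*args, **kwargs):
        result = action(*args, **kwargs)
        audit_log.info(json.dumps({
            "action": action.__name__,
            "args": repr(args),
            "kwargs": repr(kwargs),
            "result": repr(result),
            "at": datetime.now(timezone.utc).isoformat(),
        }))
        return result
    return wrapper

@audited
def adjust_setpoint(line_id: str, temperature_c: float) -> str:
    # Illustrative manufacturing action an agent might take.
    return f"line {line_id} setpoint -> {temperature_c} C"

adjust_setpoint("L3", 37.5)
```

The same pattern extends to vetoing actions before execution, which is where guardrails against unintended consequences would sit.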
Leadership accountability remains the differentiator. AI security is not a problem engineering teams can solve alone; it requires executives who understand that trust, validation, and transparency are the foundations of operational speed.
Protecting patients, protecting data, and protecting innovation are no longer separate objectives; they are interdependent priorities that define how pharmaceutical companies operate in the intelligence economy.