AI Governance Mandates Converge: A Framework for Pharma’s Multi-Jurisdiction Compliance
A new governance architecture for pharmaceutical AI aims to solve multi-jurisdiction compliance before the EU AI Act’s high-risk obligations take effect in August 2026.
The Compliance Challenge for Pharmaceutical Companies
Pharmaceutical companies deploying artificial intelligence in regulated workflows face a challenge that stems not from the technology itself but from the governance infrastructure around it. The FDA’s January 2025 draft guidance introduced a risk-based credibility assessment framework for AI models that support regulatory decisions. The EU AI Act’s high-risk system obligations cover healthcare applications and become enforceable on August 2, 2026. In January 2026, the EMA and FDA jointly published ten guiding principles for AI use across the medicines lifecycle, and pharmacovigilance authorities in APAC markets are tightening expectations for AI-assisted adverse event reporting.
As a result, many pharmaceutical companies must satisfy several AI compliance regimes at once: FDA expectations for audit trails, EU transparency and human-oversight rules under the AI Act, and local pharmacovigilance reporting standards. Most existing IT systems were never designed for this, which is driving demand for AI systems that can meet differing regulatory expectations within a single, unified platform.
The AI Governance Framework for Life Sciences
In response to this complex landscape, Vibhor Agnihotri, who leads U.S. strategy and growth at Zelthy, an AI-enabled platform provider for regulated pharmaceutical operations, has developed the AI Governance Framework for Life Sciences. The framework spans six regulated areas: compliance, regulatory affairs, pharmacovigilance, patient services, advanced therapies, and supply chain traceability. Crucially, it is designed to operate across jurisdictional boundaries without requiring a parallel governance system for each market.
Agnihotri emphasizes that “every governance failure we see traces back to the same root cause: AI was integrated as an add-on, and the audit trail has a gap in it.” The framework is structured to ensure that governance is an inherent property of the platform rather than an afterthought.
Key Features of the Framework
The framework is built on Zango, Zelthy’s open-source Django-based application framework for regulated enterprise environments. A defining aspect of Agnihotri’s approach is that governance is enforced by the technology itself rather than by policy alone.
For instance, Zango’s role-based access control engine regulates what users and AI agents can initiate, review, approve, or override within any regulated workflow. Both AI actions and human actions are governed by the same permission architecture, eliminating the need for a separate governance layer for machine-generated outputs.
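A minimal sketch of that idea follows. The class, role, and function names are hypothetical illustrations, not Zango’s actual API; the point is simply that an AI agent is just another actor whose roles determine what it may initiate or approve, and that approval rights can be withheld from it entirely.

```python
# Illustrative sketch only: these names are hypothetical, not Zango's API.
# It shows one permission model governing human users and AI agents alike.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    INITIATE = auto()
    REVIEW = auto()
    APPROVE = auto()
    OVERRIDE = auto()


@dataclass(frozen=True)
class Actor:
    identity: str          # user ID or AI agent ID
    is_ai_agent: bool
    roles: frozenset       # e.g. {"pv_case_processor"}


# One permission table for everyone: AI agents are simply actors with roles.
ROLE_PERMISSIONS = {
    "pv_case_processor": {Action.INITIATE, Action.REVIEW},
    "pv_qc_reviewer": {Action.REVIEW, Action.APPROVE},
    "ai_triage_agent": {Action.INITIATE},   # AI may draft, never approve
}


def is_permitted(actor: Actor, action: Action) -> bool:
    """Return True if any of the actor's roles grants the requested action."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in actor.roles)


if __name__ == "__main__":
    agent = Actor("triage-model-v3", is_ai_agent=True, roles=frozenset({"ai_triage_agent"}))
    reviewer = Actor("j.doe", is_ai_agent=False, roles=frozenset({"pv_qc_reviewer"}))
    assert is_permitted(agent, Action.INITIATE)
    assert not is_permitted(agent, Action.APPROVE)   # approval stays with humans
    assert is_permitted(reviewer, Action.APPROVE)
```

Under a model like this, tightening or expanding what an AI agent may do is an ordinary permission change rather than a new governance mechanism.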
Moreover, the platform’s audit logging captures a tamper-evident, object-level record of every system event, including model version, data inputs, user identity, timestamp, and decision outcome. The workflow engine enforces sequential, role-controlled process steps, ensuring that no AI output can proceed in a regulated workflow without the necessary human review checkpoints being completed and documented.
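What “tamper-evident” can mean in practice is shown by the generic sketch below, which hash-chains each audit entry to the one before it so that any edited or deleted record breaks verification of everything that follows. This is a common pattern assumed for illustration, not a description of Zango’s internals.

```python
# Illustrative sketch, not Zango's implementation: a hash-chained audit log
# in which every entry records the actor, model version, inputs, timestamp,
# and outcome, and commits to the entry before it.
import hashlib
import json
from datetime import datetime, timezone


class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, actor_id, model_version, data_inputs, decision_outcome):
        """Append an object-level event; each entry commits to its predecessor."""
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor_id": actor_id,
            "model_version": model_version,
            "data_inputs": data_inputs,
            "decision_outcome": decision_outcome,
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
        self.entries.append({**payload, "hash": digest})

    def verify(self):
        """Recompute the chain; any altered or missing entry surfaces here."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


log = AuditLog()
log.record("triage-model-v3", "3.2.1", {"case_id": "AE-1042"}, "drafted")
log.record("j.doe", "3.2.1", {"case_id": "AE-1042"}, "approved")
assert log.verify()
```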
Implications for Regulatory Compliance
This design choice has significant implications for the EU AI Act’s upcoming high-risk obligations, which will require providers and deployers of high-risk AI systems to implement risk management, human oversight, and technical documentation by August 2026. Agnihotri argues that because these controls are built into the platform’s architecture rather than layered on through policy documents, the framework already satisfies structural requirements that many companies will struggle to retrofit later.
When the FDA or an auditor requests a complete record of every AI-assisted decision, including who reviewed it, what model was in use, and what data was accessed, the answer must originate from the system architecture, not from someone manually compiling spreadsheets. The audit logging and policy framework of Zango make this record a byproduct of normal operations, eliminating the need for reconstruction.
Scalable Governance Across Jurisdictions
A notable design decision within the framework is the separation of structural governance controls from jurisdiction-specific regulatory configurations. The core controls—like role-based access, audit logging, version tracking, and evidence generation—function uniformly across all markets. Jurisdiction-specific rules, reporting formats, escalation thresholds, and submission requirements can be configured per deployment.
This approach allows enterprise pharmaceutical companies operating in FDA, EMA, and APAC regulatory environments to maintain a single governed platform rather than managing multiple compliance systems. It recognizes that while specific regulatory requirements may vary, the fundamental controls that enable auditable AI governance remain consistent across jurisdictions.
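A rough sketch of how that separation might look in configuration terms appears below. The keys and values are assumptions made for illustration, drawing on standards already named in this article (such as 21 CFR Part 11 and the AI Act’s oversight requirements); they are not Zelthy’s actual deployment schema.

```python
# Hypothetical configuration sketch: structural controls stay fixed across
# markets, while per-jurisdiction settings vary by deployment. The keys and
# values below are illustrative assumptions, not Zelthy's real schema.
CORE_CONTROLS = {
    "rbac": True,              # role-based access for humans and AI agents
    "audit_logging": True,     # tamper-evident, object-level event records
    "version_tracking": True,  # model and document versioning
    "evidence_generation": True,
}

JURISDICTION_CONFIG = {
    "US_FDA": {
        "adverse_event_format": "E2B(R3)",
        "expedited_reporting_days": 15,
        "electronic_records_rule": "21 CFR Part 11",
    },
    "EU_EMA": {
        "adverse_event_format": "E2B(R3)",
        "expedited_reporting_days": 15,
        "ai_act_high_risk_controls": ["risk_management", "human_oversight",
                                      "technical_documentation"],
    },
}


def build_deployment(market: str) -> dict:
    """Every deployment gets the same core controls plus one market's overlay."""
    return {"core": CORE_CONTROLS, "jurisdiction": JURISDICTION_CONFIG[market]}
```

The design intent is that adding a market changes only the overlay, while the structural controls that produce audit evidence never fork.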
Real-World Implementation and Future Challenges
The framework’s first practical test in the U.S. market came with a deployment at a global pharmaceutical company. The client’s IT validation and quality assurance teams assessed the AI governance architecture against the documentation, audit evidence, and change control standards that apply to validated computerized systems under FDA 21 CFR Part 11.
Agnihotri noted that the implementation was completed in weeks, significantly shorter than the typical six months required for validated system implementations at large U.S. pharmaceutical companies. He attributes this expedited timeline to the platform’s capacity to generate audit and change control evidence automatically, which obviates the need for validation teams to compile documentation manually.
Agnihotri’s work on the governance framework highlights the broader challenge that the pharmaceutical industry will face as AI governance mandates accelerate. The FDA’s January 2025 draft guidance indicates a formalization of expectations surrounding AI credibility assessments in regulatory submissions. The joint principles published by the EMA and FDA in January 2026 suggest increasing international alignment on foundational AI governance expectations.
As the EU AI Act’s high-risk obligations become enforceable in August 2026, they will impose concrete technical and documentation requirements on AI systems used in healthcare contexts.
Reflecting on his experience, Agnihotri points out, “The companies that will struggle most are those treating AI governance as a compliance exercise rather than an architecture problem. Compliance is a moving target. Architecture is what lets you hit it across every jurisdiction simultaneously.”
Zelthy’s platform is currently operational in over ten countries, with 300 applications in production serving global pharmaceutical companies. The AI Governance Framework is available for enterprise evaluation.