How a Global Biopharma Became a Leader in Ethical AI
Artificial intelligence (AI) has emerged as a key agent of transformation, providing enhanced insights and efficiency across all industries, including the biopharmaceutical sector.
Identifying Gaps in AI Governance
A global biopharmaceutical company conducted an internal AI assessment to evaluate the maturity of its processes. The assessment revealed a significant gap: the absence of an AI governance framework. Closing this gap was crucial if the company was to harness the opportunities of AI while mitigating technical, social, and ethical risks such as decision-making bias and privacy violations.
Developing a Comprehensive Framework
The firm developed a comprehensive AI governance framework that embraced responsible AI principles, including transparency, fairness, and human-centricity. The Chief Information Officer emphasized, “We have a collective responsibility to manage AI risk. Our AI ethics principles are an integral part of our AI risk management strategy.”
The Role of Independent Partners
To ensure it was on the right path, the biopharma engaged EY as an independent partner for assurance. The external review revealed that the company was not consistently managing project-specific AI risks in accordance with its responsible AI principles, and the gaps the assessment identified enabled the company to establish minimum requirements for business teams working with AI.
Early Adoption and Continuous Improvement
As an early adopter of AI risk management in the biopharmaceutical industry, the company benefited from the confidence and insights EY provided. The Chief Information Officer noted, “Partnering with EY provided external validation of our approach and highlighted areas needing additional focus.”
Necessary Changes in AI Governance
The EY review prompted the biopharma to recognize the need for substantial changes in its AI governance approach. Key recommendations included:
- Improvement of third-party AI risk assessment
- Establishment of a central AI inventory to aid in risk management and regulatory compliance
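
In practice, a central AI inventory is a structured register of every AI system alongside its risk attributes. The sketch below is a minimal illustration in Python, using hypothetical field names rather than the company's actual schema, of how such a register might support risk triage and compliance reporting:

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One record in a central AI inventory (hypothetical schema)."""
    system_name: str
    business_unit: str
    risk_tier: str                    # e.g. "high", "medium", "low"
    uses_third_party_model: bool      # supports third-party risk assessment
    reviewed_against_principles: bool # checked against responsible AI principles

def systems_needing_review(inventory):
    """Flag entries that are high risk or not yet reviewed."""
    return [
        e for e in inventory
        if e.risk_tier == "high" or not e.reviewed_against_principles
    ]

# Illustrative entries based on the project types mentioned in this case study
inventory = [
    AIInventoryEntry("demand-forecasting", "Supply Chain", "medium", False, True),
    AIInventoryEntry("adverse-event-tracking", "Pharmacovigilance", "high", True, True),
    AIInventoryEntry("early-disease-detection", "R&D", "high", False, False),
]

flagged = systems_needing_review(inventory)
```

Even a simple register like this gives risk and compliance teams a single place to see which AI systems exist, who owns them, and which ones warrant closer scrutiny.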
The review emphasized that there is no one-size-fits-all method for AI governance. Federated companies with distributed autonomy must achieve consistency across multiple units, while centralized organizations must adopt a different strategy.
Customization of Governance Reviews
It became apparent that if an independent review finds an organization’s AI governance inadequate, leadership must be willing to implement structural changes or establish a governance board to improve alignment. Customizing such reviews is essential, as organizational structures, leadership, and accountabilities vary widely.
Collaborative Assessment Process
EY teams collaborated with the biopharma to create a responsible AI assessment tailored to its needs. The global responsible AI framework served as a flexible set of guiding principles and practical actions.
Multi-disciplinary EY teams consisting of digital ethicists, IT risk practitioners, data scientists, and subject-matter experts evaluated the biopharma’s responsible AI principles and how they were implemented across the organization. This included an assessment of key AI projects such as forecasting, adverse event tracking, and early disease detection.
Conclusion
The evolving ethics of AI present challenges for many companies that lack the in-house capabilities to start or continue their journey. Support from an independent partner can add value by helping organizations develop AI governance processes tailored to their specific business needs. This approach enhances the likelihood of regulatory compliance and positions leadership to protect stakeholders and the public from the risks associated with AI, ensuring that humans remain at the center of this transformative technology.