Leading the Way in Ethical AI: A Biopharma Transformation

How a Global Biopharma Became a Leader in Ethical AI

Artificial intelligence (AI) has emerged as a key driver of transformation, delivering new insights and efficiencies across industries, including the biopharmaceutical sector.

Identifying Gaps in AI Governance

A global biopharmaceutical company conducted an internal AI assessment to evaluate the maturity of its processes. This assessment revealed a significant gap: the absence of an AI governance framework. Addressing this gap was essential if the company was to harness the opportunities of AI while mitigating technical, social, and ethical risks such as decision-making bias and privacy violations.

Developing a Comprehensive Framework

The firm developed a comprehensive AI governance framework that embraced responsible AI principles, including transparency, fairness, and human-centricity. The Chief Information Officer emphasized, “We have a collective responsibility to manage AI risk. Our AI ethics principles are an integral part of our AI risk management strategy.”

The Role of Independent Partners

To ensure it was on the right path, the biopharma sought an independent partner for assurance. An external review revealed that the company was not consistently managing project-specific AI risks in accordance with its responsible AI principles. By identifying these gaps, the EY assessment enabled the company to establish minimum requirements for business teams working with AI.

Early Adoption and Continuous Improvement

As an early adopter of AI risk management in the biopharmaceutical industry, the company benefited from the confidence and insights provided by EY. The Chief Information Officer noted, “Partnering with EY provided external validation of our approach and highlighted areas needing additional focus.”

Necessary Changes in AI Governance

The EY review prompted the biopharma to recognize the need for substantial changes in its AI governance approach. Key recommendations included:

  • Improved third-party AI risk assessment
  • Establishment of a central AI inventory to aid in risk management and regulatory compliance (a minimal sketch follows this list)
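
As a rough illustration of the second recommendation, the sketch below shows what a single entry in a central AI inventory might capture. The field names (owner, business unit, risk tier, third-party components, review dates) and the Python structure are illustrative assumptions, not the company’s actual schema.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    # Hypothetical record for one AI system in a central inventory.
    # Field names are illustrative assumptions, not the company's schema.
    @dataclass
    class AIInventoryEntry:
        system_name: str                  # e.g. "demand-forecasting-model"
        business_unit: str                # owning unit in a federated organization
        owner: str                        # accountable person or role
        purpose: str                      # short description of the use case
        risk_tier: str                    # e.g. "low", "medium", or "high"
        uses_personal_data: bool          # flags the need for a privacy review
        third_party_components: List[str] = field(default_factory=list)
        last_risk_review: Optional[date] = None

        def review_overdue(self, today: date, max_days: int = 365) -> bool:
            """Flag systems whose periodic risk review has lapsed."""
            if self.last_risk_review is None:
                return True
            return (today - self.last_risk_review).days > max_days

Even a simple structure like this lets governance teams answer basic questions, such as which high-risk systems depend on third-party components or which periodic reviews have lapsed.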

The review emphasized that there is no one-size-fits-all method for AI governance. Federated companies with distributed autonomy must achieve consistency across multiple units, while centralized organizations must adopt a different strategy.

Customization of Governance Reviews

It became apparent that if an independent review finds an organization’s AI governance inadequate, leadership must be willing to implement structural changes or form a governance board for better alignment. The customization of such reviews is essential as organizational structures, leadership, and accountabilities vary widely.

Collaborative Assessment Process

EY teams collaborated with the biopharma to create a responsible AI assessment tailored to its needs. EY’s global responsible AI framework served as a flexible set of guiding principles and practical actions.

Multi-disciplinary EY teams consisting of digital ethicists, IT risk practitioners, data scientists, and subject-matter experts evaluated the biopharma’s responsible AI principles and how they were implemented across the organization. This included an assessment of key AI projects such as forecasting, adverse event tracking, and early disease detection.
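
To make this concrete, the snippet below sketches one way such project-level findings might be recorded against the responsible AI principles named earlier (transparency, fairness, human-centricity). The 1–5 scoring scale, the threshold, and the function name are assumptions for illustration only, not EY’s actual assessment methodology.

    from typing import Dict, List

    # Illustrative only: record assessment scores per project against
    # responsible AI principles and surface the weakest areas.
    PRINCIPLES = ["transparency", "fairness", "human-centricity"]

    def summarize_assessment(projects: Dict[str, Dict[str, int]]) -> List[str]:
        """Return a finding for any principle scored below 3 on a 1-5 scale."""
        findings = []
        for project, scores in projects.items():
            for principle in PRINCIPLES:
                score = scores.get(principle)
                if score is None or score < 3:
                    findings.append(f"{project}: gap against '{principle}' (score={score})")
        return findings

    # Assumed scores for the kinds of projects mentioned above.
    for finding in summarize_assessment({
        "forecasting": {"transparency": 4, "fairness": 3, "human-centricity": 4},
        "adverse event tracking": {"transparency": 2, "fairness": 4, "human-centricity": 3},
        "early disease detection": {"transparency": 3, "fairness": 2, "human-centricity": 4},
    }):
        print(finding)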

Conclusion

The evolving ethics of AI present challenges for many companies that lack the in-house capabilities to start or continue their journey. Support from an independent partner can add value by helping organizations develop AI governance processes tailored to their specific business needs. This approach enhances the likelihood of regulatory compliance and positions leadership to protect stakeholders and the public from the risks associated with AI, ensuring that humans remain at the center of this transformative technology.
