Shaping Responsible AI Governance in Healthcare

First Do No Harm: Shaping Health AI Governance in a Changing Global Landscape

The AI regulatory landscape has shifted dramatically in recent months, driven by a growing rhetoric that casts regulation as a barrier to AI innovation and to the opportunities AI presents across sectors, including health and the life sciences.

Over the past six months, the U.S. approach to AI regulation has changed markedly, most visibly through the reversal of many initiatives from the previous administration. The EU has moved in a similar direction, abandoning its proposed AI Liability Directive in favor of a more innovation-friendly approach under the Commission’s 2025 work programme.

The UK has adopted a pro-innovation stance in its AI policy-making, although its approach differs from that of the EU and the U.S. Rather than implementing AI-specific regulations, the UK has focused on revising existing sector-specific legislation to support AI developers while protecting the public. This involves enhancing regulators’ enforcement capacities and ensuring that existing regulations do not create conflicting burdens.

The UK’s approach is not without challenges, however. Regulations must be revised carefully to accommodate AI, avoiding duplication and ensuring that regulators have the expertise to assess the novel risks these technologies pose. Effective coordination across regulatory regimes is equally essential, so that harmful developments do not slip through the gaps.

Governance and Liability in Medical AI: Who’s Responsible When AI Goes Wrong?

As AI tools in healthcare begin to outperform human capabilities in specific tasks, the balance between AI autonomy and human oversight becomes increasingly complex. Questions surrounding responsibility and liability are now at the forefront for regulators, developers, and clinicians, especially when harm occurs.

To explore these challenges, a recent expert workshop examined how the relevant regulatory frameworks intersect, including medical device law, the EU AI Act, data protection, and negligence and product liability law. The discussions surfaced gaps and overlaps in how responsibility and liability are allocated among developers, deployers, and users of medical AI, yielding valuable insights for policymakers and regulators.

Synthetic Health Data

Synthetic data holds significant potential for advancing the development of Artificial Intelligence as a Medical Device (AIaMD), particularly in scenarios where real-world data is scarce or sensitive. While existing guidelines provide a solid foundation, the incorporation of synthetic data—especially in regulatory submissions—requires further clarity.

In collaboration with partners, efforts are underway to develop guiding principles that facilitate dialogue between manufacturers and approving bodies regarding the use of synthetic data. The aim is to ensure that regulatory expectations keep pace with rapid innovation while promoting the responsible use of synthetic data for improved patient health outcomes.
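
To make the fidelity question concrete, the sketch below shows one simple check a manufacturer might run before relying on synthetic data: comparing each feature’s marginal distribution in the synthetic set against the real data it stands in for. This is a minimal sketch under invented assumptions; the naive Gaussian generator, the feature names, and the dataset are hypothetical and are not drawn from any guideline or regulatory submission.

    # Illustrative sketch only: a deliberately naive synthetic data generator
    # plus a marginal-fidelity check. Real AIaMD work would use far stronger
    # generators (copulas, GANs) and utility/privacy metrics as well.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(42)

    # Stand-in for a scarce, sensitive real-world dataset (rows = patients,
    # columns = hypothetical vitals: systolic and diastolic blood pressure).
    real = rng.normal(loc=[120.0, 80.0], scale=[15.0, 10.0], size=(200, 2))

    def naive_synthesize(data, n, rng):
        """Draw synthetic rows from independent Gaussians fitted per feature.
        Ignores inter-feature correlation -- deliberately simplistic."""
        mu, sigma = data.mean(axis=0), data.std(axis=0)
        return rng.normal(mu, sigma, size=(n, data.shape[1]))

    synthetic = naive_synthesize(real, n=1000, rng=rng)

    # Fidelity check: two-sample Kolmogorov-Smirnov test per feature.
    # A small p-value flags a marginal the generator failed to reproduce.
    for j, name in enumerate(["systolic_bp", "diastolic_bp"]):
        stat, p = ks_2samp(real[:, j], synthetic[:, j])
        print(f"{name}: KS statistic = {stat:.3f}, p = {p:.3f}")

Marginal fidelity is only one axis, of course; utility for the downstream clinical task and privacy risk would each need their own checks in any real submission.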

Challenges for Post-Market Surveillance of Medical AI

AI poses distinctive challenges for regulating medical devices in their operational phase, because deployed models can change in ways that affect safety and performance. The MHRA’s AI Airlock programme exemplifies a multidisciplinary effort to identify and address these novel regulatory challenges; workshops have gathered insights from clinical, technical, policy, legal, and regulatory experts on the monitoring and reporting that AI requires within post-market surveillance frameworks.
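
One way to picture what post-market monitoring of a deployed medical AI can involve is a rolling check of the model’s error rate against its validated baseline. The sketch below is purely illustrative: the window size, baseline, tolerance, and alerting logic are invented for the example and are not MHRA requirements or AI Airlock outputs.

    # Illustrative sketch only: a minimal rolling-window monitor that flags
    # when a deployed model's error rate drifts above its validated baseline.
    from collections import deque

    class DriftMonitor:
        def __init__(self, window=200, baseline_error=0.05, tolerance=0.02):
            self.errors = deque(maxlen=window)  # 1 = wrong, 0 = correct
            self.baseline = baseline_error      # error rate from pre-market validation
            self.tolerance = tolerance          # allowed degradation before alerting

        def record(self, prediction, ground_truth):
            """Log one adjudicated case; return True if an alert should fire."""
            self.errors.append(int(prediction != ground_truth))
            if len(self.errors) < self.errors.maxlen:
                return False                    # not enough data to judge yet
            current = sum(self.errors) / len(self.errors)
            return current > self.baseline + self.tolerance

    # Usage: feed the monitor each case once clinical ground truth is known.
    monitor = DriftMonitor(window=4, baseline_error=0.05, tolerance=0.02)
    cases = [("benign", "benign"), ("malignant", "benign"),
             ("benign", "benign"), ("malignant", "benign")]  # toy data
    for pred, truth in cases:
        if monitor.record(pred, truth):
            print("Post-market alert: error rate exceeds validated baseline.")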

Key Themes in AI Governance Projects

  • Agile Regulation and International Alignment: With AI capabilities evolving faster than existing safety mechanisms, regulators must be able to adapt dynamically. This requires not only national efforts but also greater regulatory alignment across borders. The UK’s decision not to sign the international declaration at the 2025 AI Action Summit in Paris makes preventing regulatory fragmentation all the more urgent.
  • Regulatory and Ethical Concerns: Although the policy focus has shifted towards innovation, regulatory and ethical concerns remain pertinent. A renewed emphasis on innovation risks reviving the very harms regulation was meant to guard against, so a balanced approach that prioritizes safety is needed, especially in health.
  • Resourcing: Ensuring that regulators possess the necessary resources and expertise to keep pace with rapidly evolving AI technologies is crucial. Improved coordination across regulatory domains and internationally is essential to create responsive, future-ready governance. Although there is a trend towards pro-innovation approaches, this must not compromise safety, public trust, or ethical oversight in high-stakes areas like health.

Efforts by various stakeholders aim to navigate this delicate balancing act, promoting the responsible adoption of AI technologies that are not only innovative but also effective and safe, and that genuinely enhance patient outcomes.

More Insights

AI Regulations: Comparing the EU’s AI Act with Australia’s Approach

Global companies need to navigate the differing AI regulations in the European Union and Australia, with the EU's AI Act setting stringent requirements based on risk levels, while Australia adopts a...

Quebec’s New AI Guidelines for Higher Education

Quebec has released its AI policy for universities and Cégeps, outlining guidelines for the responsible use of generative AI in higher education. The policy aims to address ethical considerations and...

AI Literacy: The Compliance Imperative for Businesses

As AI adoption accelerates, regulatory expectations are rising, particularly under the EU's AI Act, which requires organizations to ensure their staff are AI literate. This article emphasizes the importance of...

Germany’s Approach to Implementing the AI Act

Germany is moving forward with the implementation of the EU AI Act, designating the Federal Network Agency (BNetzA) as the central authority for monitoring compliance and promoting innovation. The...

Global Call for AI Safety Standards by 2026

World leaders and AI pioneers are calling on the United Nations to implement binding global safeguards for artificial intelligence by 2026. This initiative aims to address the growing concerns...

Governance in the Era of AI and Zero Trust

In 2025, AI has transitioned from mere buzz to practical application across various industries, highlighting the urgent need for a robust governance framework aligned with the zero trust economy...

AI Governance Shift: From Regulation to Technical Secretariat

The upcoming governance framework on artificial intelligence in India may introduce a "technical secretariat" to coordinate AI policies across government departments, moving away from the previous...

AI Safety as a Catalyst for Innovation in Global Majority Nations

The commentary discusses the tension between regulating AI for safety and promoting innovation, emphasizing that investments in AI safety and security can foster sustainable development in Global...

ASEAN’s AI Governance: Charting a Distinct Path

ASEAN's approach to AI governance is characterized by a consensus-driven, voluntary, and principles-based framework that allows member states to navigate their unique challenges and capacities...