First Do No Harm: Shaping Health AI Governance in a Changing Global Landscape

The AI regulatory landscape has shifted dramatically in recent months, amid a growing narrative that casts regulation as a barrier to AI innovation and to the opportunities AI presents across sectors, including health and the life sciences.

Over the past six months, the U.S. approach to AI regulation has changed markedly, most visibly through the reversal of many initiatives from the previous administration. The EU has moved in a similar direction, withdrawing its proposed AI Liability Directive in favor of a more innovation-friendly approach under its 2025 Commission work programme.

The UK has adopted a pro-innovation stance in its AI policy-making, although its approach differs from that of the EU and the U.S. Rather than implementing AI-specific regulations, the UK has focused on revising existing sector-specific legislation to support AI developers while protecting the public. This involves enhancing regulators’ enforcement capacities and ensuring that existing regulations do not create conflicting burdens.

However, the UK’s approach presents challenges. Regulations must be revised carefully to accommodate AI, avoiding duplication and ensuring that regulators have the expertise to assess the novel risks these technologies pose. Effective coordination across the various regulatory regimes is essential to prevent harmful developments from slipping through the gaps.

Governance and Liability in Medical AI: Who’s Responsible When AI Goes Wrong?

As AI tools in healthcare begin to outperform human capabilities in specific tasks, the balance between AI autonomy and human oversight becomes increasingly complex. Questions surrounding responsibility and liability are now at the forefront for regulators, developers, and clinicians, especially when harm occurs.

To explore these challenges, a recent expert workshop examined how the relevant regulatory frameworks intersect, including medical device law, the EU AI Act, data protection, and negligence and product liability law. The discussions highlighted gaps and overlaps in how responsibility and liability are allocated among developers, deployers, and users of medical AI, yielding valuable insights for policymakers and regulators.

Synthetic Health Data

Synthetic data holds significant potential for advancing the development of Artificial Intelligence as a Medical Device (AIaMD), particularly where real-world data is scarce or sensitive. While existing guidelines provide a solid foundation, the use of synthetic data, especially in regulatory submissions, still requires clearer guidance.

In collaboration with partners, efforts are underway to develop guiding principles that facilitate dialogue between manufacturers and approving bodies regarding the use of synthetic data. The aim is to ensure that regulatory expectations keep pace with rapid innovation while promoting the responsible use of synthetic data for improved patient health outcomes.
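
By way of illustration, the sketch below shows one common technique for producing synthetic tabular data: a Gaussian copula that preserves each variable's marginal distribution and the correlations between variables. It is a minimal, hypothetical sketch; the column names and the toy "real" cohort are invented, and an actual AIaMD submission would rely on validated generation tools plus formal privacy and fidelity evaluation.

```python
# Minimal sketch: synthetic tabular health data via a Gaussian copula.
# All column names and the toy cohort below are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical real-world cohort (stand-in for scarce or sensitive data).
real = pd.DataFrame({
    "age": rng.normal(62, 12, 500).clip(18, 95),
    "systolic_bp": rng.normal(135, 18, 500).clip(80, 220),
    "hba1c": rng.gamma(9, 0.7, 500).clip(4, 14),
})

def fit_gaussian_copula(df: pd.DataFrame) -> np.ndarray:
    """Estimate the copula correlation from rank-transformed columns."""
    n = len(df)
    # Map each column to standard-normal scores via its empirical CDF.
    z = np.column_stack([
        stats.norm.ppf((df[c].rank() - 0.5) / n) for c in df.columns
    ])
    return np.corrcoef(z, rowvar=False)

def sample_synthetic(df: pd.DataFrame, corr: np.ndarray, m: int) -> pd.DataFrame:
    """Draw correlated normals, then invert each empirical marginal."""
    z = rng.multivariate_normal(np.zeros(len(df.columns)), corr, size=m)
    u = stats.norm.cdf(z)  # uniform scores that preserve the correlation
    return pd.DataFrame({
        c: np.quantile(df[c], u[:, i])  # empirical inverse-CDF per column
        for i, c in enumerate(df.columns)
    })

corr = fit_gaussian_copula(real)
synthetic = sample_synthetic(real, corr, m=1000)
print(synthetic.describe().round(1))
```

The copula approach is only one option among many; whatever the method, the regulatory question is less how the data were generated than how their fidelity, utility, and privacy properties are evidenced to approving bodies.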

Challenges for Post-Market Surveillance of Medical AI

AI presents unique challenges for the regulation of medical devices in the post-market phase, because deployed models can change or drift in ways that affect safety and performance. The MHRA’s AI Airlock, a regulatory sandbox, exemplifies a multidisciplinary effort to identify and address these novel regulatory challenges. Workshops have gathered insights from clinical, technical, policy, legal, and regulatory experts on the monitoring and reporting that AI requires within post-market surveillance frameworks.
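
To make the monitoring problem concrete, the sketch below computes one widely used drift statistic, the Population Stability Index (PSI), comparing recent prediction scores from a deployed model against its pre-market reference distribution. The data, thresholds, and alert rule are illustrative assumptions, not regulatory requirements.

```python
# Minimal sketch of one post-market monitoring signal: the Population
# Stability Index (PSI) on a deployed model's prediction scores.
import numpy as np

def psi(reference: np.ndarray, recent: np.ndarray, bins: int = 10) -> float:
    """PSI = sum((p_new - p_ref) * ln(p_new / p_ref)) over score bins."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range scores
    p_ref = np.histogram(reference, edges)[0] / len(reference)
    p_new = np.histogram(recent, edges)[0] / len(recent)
    eps = 1e-6  # avoid log(0) for empty bins
    return float(np.sum((p_new - p_ref) * np.log((p_new + eps) / (p_ref + eps))))

rng = np.random.default_rng(0)
reference = rng.beta(2, 5, 10_000)  # scores from pre-market validation
recent = rng.beta(2.6, 4.4, 2_000)  # scores after a hypothetical population shift

score = psi(reference, recent)
# Common rule of thumb: <0.1 stable, 0.1-0.25 investigate, >0.25 act.
print(f"PSI = {score:.3f} -> {'investigate' if score > 0.1 else 'stable'}")
```

In practice such a statistic would be one input among many, alongside outcome audits, incident reporting, and change-management controls, rather than a standalone trigger.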

Key Themes in AI Governance Projects

  • Agile Regulation and International Alignment: With AI capabilities evolving faster than existing safety mechanisms, regulators must be prepared to adapt dynamically. This requires not only national efforts but also greater regulatory alignment across borders. The UK’s decision not to sign the international declaration at the 2025 AI Action Summit in Paris heightens the urgency of preventing regulatory fragmentation.
  • Regulatory and Ethical Concerns: While the policy focus has shifted towards innovation, regulatory and ethical concerns remain pertinent. A renewed emphasis on innovation could allow risks that earlier frameworks sought to contain to resurface, necessitating a balanced approach that prioritizes safety, especially in health.
  • Resourcing: Ensuring that regulators possess the necessary resources and expertise to keep pace with rapidly evolving AI technologies is crucial. Improved coordination across regulatory domains and internationally is essential to create responsive, future-ready governance. Although there is a trend towards pro-innovation approaches, this must not compromise safety, public trust, or ethical oversight in high-stakes areas like health.

Efforts by various stakeholders aim to navigate this delicate balancing act, promoting the responsible adoption of AI technologies that are not only innovative but also effective, safe, and genuinely enhance patient outcomes.
