First Do No Harm: Shaping Health AI Governance in a Changing Global Landscape
The AI regulatory landscape has shifted dramatically in recent months, driven by a growing rhetoric that frames regulation as a barrier to AI innovation and to the opportunities it presents across sectors, including health and the life sciences.
Over the past six months, the U.S. approach to AI regulation has changed markedly, most visibly through the reversal of many initiatives from the previous administration. The EU has moved in a similar direction, withdrawing its proposed AI Liability Directive in favor of a more innovation-friendly approach set out in its 2025 Commission work programme.
The UK has adopted a pro-innovation stance in its AI policy-making, although its approach differs from that of the EU and the U.S. Rather than implementing AI-specific regulations, the UK has focused on revising existing sector-specific legislation to support AI developers while protecting the public. This involves enhancing regulators’ enforcement capacities and ensuring that existing regulations do not create conflicting burdens.
However, the UK’s approach presents its own challenges. Existing regulations must be revised carefully to accommodate AI, avoiding duplication and ensuring that regulators have the expertise to assess the novel risks these technologies pose. Effective coordination across regulatory regimes is also essential to prevent harmful developments from slipping through the gaps.
Governance and Liability in Medical AI: Who’s Responsible When AI Goes Wrong?
As AI tools in healthcare begin to outperform human capabilities in specific tasks, the balance between AI autonomy and human oversight becomes increasingly complex. Questions surrounding responsibility and liability are now at the forefront for regulators, developers, and clinicians, especially when harm occurs.
To explore these challenges, a workshop brought together experts to examine how various regulatory frameworks intersect, including medical device law, the EU AI Act, data protection, and negligence and product liability law. The discussions highlighted gaps and overlaps in how responsibility and liability are allocated among developers, deployers, and users of medical AI, generating valuable insights for policymakers and regulators.
Synthetic Health Data
Synthetic data holds significant potential for advancing the development of Artificial Intelligence as a Medical Device (AIaMD), particularly in scenarios where real-world data is scarce or sensitive. While existing guidelines provide a solid foundation, the incorporation of synthetic data—especially in regulatory submissions—requires further clarity.
In collaboration with partners, efforts are underway to develop guiding principles that facilitate dialogue between manufacturers and approving bodies regarding the use of synthetic data. The aim is to ensure that regulatory expectations keep pace with rapid innovation while promoting the responsible use of synthetic data for improved patient health outcomes.
Challenges for Post-Market Surveillance of Medical AI
AI poses unique challenges for the regulation of medical devices in their operational phase, because deployed systems can change in ways that affect safety and performance. The MHRA’s AI Airlock program exemplifies a multidisciplinary effort to identify and address these novel regulatory challenges. Workshops have gathered insights from clinical, technical, policy, legal, and regulatory experts on the monitoring and reporting that AI requires within post-market surveillance frameworks.
Key Themes in AI Governance Projects
- Agile Regulation and International Alignment: With AI capabilities evolving at a pace that outstrips existing safety mechanisms, regulators must be prepared to adapt dynamically. This requires not only national efforts but also greater regulatory alignment across borders. The UK’s decision not to sign the international AI agreement at the 2025 Global AI Summit heightens the urgency of preventing regulatory fragmentation.
- Regulatory and Ethical Concerns: While the focus has shifted towards innovation, regulatory and ethical concerns remain pertinent. The renewed emphasis on innovation could allow the associated risks to resurface, necessitating a balanced approach that prioritizes safety, especially in health.
- Resourcing: Ensuring that regulators possess the necessary resources and expertise to keep pace with rapidly evolving AI technologies is crucial. Improved coordination across regulatory domains and internationally is essential to create responsive, future-ready governance. Although there is a trend towards pro-innovation approaches, this must not compromise safety, public trust, or ethical oversight in high-stakes areas like health.
Efforts by various stakeholders aim to navigate this delicate balancing act, promoting the responsible adoption of AI technologies that are not only innovative but also effective and safe, genuinely improving patient outcomes.