First Do No Harm: Shaping Health AI Governance in a Changing Global Landscape

The AI regulatory landscape has shifted dramatically in recent months, amid growing rhetoric that casts regulation as a barrier to AI innovation and to the opportunities it presents across sectors, including health and the life sciences.

Over the past six months, the U.S. approach to AI regulation has changed markedly, most visibly through the reversal of many initiatives introduced by the previous administration. The shift mirrors a broader trend: the EU has withdrawn its proposed AI Liability Directive, signalling a more innovation-friendly approach in its 2025 Commission work programme.

The UK has adopted a pro-innovation stance in its AI policy-making, although its approach differs from that of the EU and the U.S. Rather than implementing AI-specific regulations, the UK has focused on revising existing sector-specific legislation to support AI developers while protecting the public. This involves enhancing regulators’ enforcement capacities and ensuring that existing regulations do not create conflicting burdens.

However, the UK's approach brings its own challenges. Existing regulations must be revised carefully to accommodate AI, avoiding duplication and ensuring that regulators have the expertise to assess the novel risks these technologies pose. Effective coordination across regulatory regimes is also essential to prevent harmful developments from slipping through the gaps.

Governance and Liability in Medical AI: Who’s Responsible When AI Goes Wrong?

As AI tools in healthcare begin to outperform human capabilities in specific tasks, the balance between AI autonomy and human oversight becomes increasingly complex. Questions surrounding responsibility and liability are now at the forefront for regulators, developers, and clinicians, especially when harm occurs.

To explore these challenges, an expert workshop examined how various regulatory frameworks intersect, including medical device law, the EU AI Act, data protection, and negligence and product liability law. The discussions highlighted gaps and overlaps in how responsibilities and liabilities are allocated among developers, deployers, and users of medical AI, generating valuable insights for policymakers and regulators.

Synthetic Health Data

Synthetic data holds significant potential for advancing the development of Artificial Intelligence as a Medical Device (AIaMD), particularly in scenarios where real-world data is scarce or sensitive. While existing guidelines provide a solid foundation, the incorporation of synthetic data—especially in regulatory submissions—requires further clarity.

In collaboration with partners, efforts are underway to develop guiding principles that facilitate dialogue between manufacturers and approving bodies regarding the use of synthetic data. The aim is to ensure that regulatory expectations keep pace with rapid innovation while promoting the responsible use of synthetic data for improved patient health outcomes.

Challenges for Post-Market Surveillance of Medical AI

AI presents unique challenges for the regulation of medical devices during their operational phase due to the potential for changes affecting safety and performance. The innovative approach taken by the MHRA through its AI Airlock program exemplifies a multidisciplinary effort to identify and address the novel regulatory challenges posed by AI. Workshops have been conducted to gather insights from clinical, technical, policy, legal, and regulatory experts on monitoring and reporting needs for AI within post-market surveillance frameworks.

Key Themes in AI Governance Projects

  • Agile Regulation and International Alignment: With AI capabilities evolving at a pace that outstrips existing safety mechanisms, regulators must be prepared to adapt dynamically. This requires not only national efforts but also greater regulatory alignment across borders. The UK’s decision not to sign the international AI agreement at the 2025 Global AI Summit heightens the urgency of preventing regulatory fragmentation.
  • Regulatory and Ethical Concerns: While the policy focus has shifted towards innovation, regulatory and ethical concerns remain pertinent. A renewed emphasis on innovation could allow previously identified risks to resurface, necessitating a balanced approach that prioritizes safety, especially in health.
  • Resourcing: Ensuring that regulators possess the necessary resources and expertise to keep pace with rapidly evolving AI technologies is crucial. Improved coordination across regulatory domains and internationally is essential to create responsive, future-ready governance. Although there is a trend towards pro-innovation approaches, this must not compromise safety, public trust, or ethical oversight in high-stakes areas like health.

Efforts by various stakeholders aim to navigate this delicate balancing act, promoting the responsible adoption of AI technologies that are not only innovative but also effective, safe, and genuinely enhance patient outcomes.
