Making AI Regulation Work in the Real World of Healthcare
On February 2, 2026, the UKAI Life Sciences Working Group submitted a response to the Medicines and Healthcare products Regulatory Agency (MHRA) regarding the National Commission into the Regulation of AI in Healthcare. The submission was informed by a recent roundtable, co-chaired with Curia, which brought together stakeholders from the NHS, regulatory bodies, industry, technology providers, and policy experts.
The Current State of AI Regulation in Healthcare
The central insight from the discussions is that AI in healthcare is already regulated. Existing frameworks such as medical device regulation, pharmacovigilance, and clinical governance are in place. However, the evolving nature of software-driven systems raises questions about how effectively these regulations apply, particularly as AI systems can change over time and behave differently in various contexts.
Challenges with Software Behavior
Much of the regulatory framework was designed for static products such as traditional hardware. AI-enabled systems, by contrast, can update regularly, learn from new data, and be deployed in unpredictable ways. This variability complicates the regulatory landscape, as highlighted during the roundtable discussions led by participants including Dr. Mani Hussain of the MHRA.
Post-Deployment Concerns
One critical area of focus was the importance of monitoring AI systems after deployment. Existing concepts of vigilance, such as incident review processes, must be adapted to accommodate AI systems that can change incrementally. Without effective post-deployment oversight, confidence in AI applications will remain tenuous, regardless of the rigor of pre-market assessments.
The Importance of Risk Proportionality
A recurring theme throughout the discussions was the need for risk-based proportionality. Not all AI applications carry the same risks: a system performing low-risk administrative tasks should not be regulated in the same manner as one influencing critical clinical decisions. Failing to draw these distinctions may hinder innovation where it poses the least danger while neglecting the areas that require the most stringent oversight.
Clarity of Responsibility
Responsibility across the AI lifecycle is another major concern. As AI systems become more integrated into workflows, the roles of manufacturers, healthcare organizations, and clinicians can become blurred. Clear allocation of responsibilities is essential for ensuring accountability and building confidence. Clinicians must understand their responsibilities, while organizations need clarity on governance obligations, and developers require predictable expectations.
Ongoing Conversations About Regulation
The submission to the MHRA and the accompanying roundtable signify a practical assessment of the current state of AI regulation in healthcare. For the UKAI Life Sciences Working Group, this initiative marks a continuation of collaborative efforts with Curia, the MHRA, and industry stakeholders to develop workable governance for AI within healthcare delivery.
The future of AI in the NHS will not be determined merely by the existence of regulations, but by how effectively those regulations are applied to reflect the realities of healthcare operations and how the technology is actually used.