Lack of Resources Greatest Hurdle for Regulating AI
On February 4, 2026, the Joint Committee on Human Rights convened to address the pressing need for effective regulation of artificial intelligence (AI) in the UK. Regulators told the committee that closer cooperation between them, alongside increased funding, is essential to mitigate the potential human rights harms associated with the rapid proliferation of AI systems.
The Regulatory Landscape
More than 13 UK regulators currently deal with aspects of AI, but no dedicated body focuses solely on AI regulation. The government has indicated that existing regulatory frameworks should suffice, yet representatives from the Equality and Human Rights Commission (EHRC), the Information Commissioner’s Office (ICO), and Ofcom cautioned that this fragmented approach risks falling behind the fast pace of AI development.
Mary-Ann Stephenson, chair of the EHRC, emphasized that the most significant barrier to effective regulation is a lack of resources. She pointed out that the EHRC’s budget has been frozen at £17.1 million since 2012, a figure that does not cover the commission’s statutory functions and amounts to a 35% cut in real terms.
Legal Framework and Constraints
According to Stephenson, the legal framework, primarily established through the Equality Act, is sufficient to address AI-related discrimination and rights violations; the real constraint is the capacity and resources of the regulators, which forces a reactive rather than proactive approach to enforcement. She reiterated that the government must ensure existing regulators receive adequate funding and are able to collaborate effectively.
Calls for a Coordinating Body
The committee expressed strong interest in establishing a dedicated AI regulator. Labour peer Baroness Chakrabarti compared AI regulation to the regulation of the pharmaceutical industry, noting that, like medicines, AI technologies carry both potential benefits and risks.
The regulators suggested that, rather than a single super-regulator, a coordinating body could strengthen cross-regulatory mechanisms. Because AI is a general-purpose technology, they argued, regulation may be more effective when distributed among sector-specific regulators.
Current Coordination Efforts
Some coordination already exists. The Digital Regulation Cooperation Forum (DRCF), established in July 2020, has set up cross-regulatory teams to share insights and build collective strategies on digital matters, including algorithmic processing and digital advertising technologies.
The Challenge of Misinformation
As the discussion unfolded, Andrew Breeze, director for online safety technology policy at Ofcom, argued for enhanced international regulatory cooperation on AI-generated disinformation. He noted that the UK’s Online Safety Act provides no power to regulate the spread of misinformation on social media, a gap that contrasts sharply with measures in place in the European Union.
Age Regulation and Online Safety
Regulators also responded to skepticism about age assurance safeguards in the context of proposed social media restrictions for under-16s. Breeze acknowledged that age assurance presents challenges but said it remains crucial for child protection.
Concerns Over Deregulation
In a previous committee hearing in November 2025, concerns were raised regarding the UK government’s deregulatory approach to AI. Critics argued that this approach could exacerbate human rights harms and lead to public disenfranchisement.
Silkie Carlo, director of Big Brother Watch, warned that the government’s optimistic perspective on AI could undermine essential protections against automated decision-making, raising alarms about the potential for mass surveillance enabled by AI technologies.
The hearing underlined the need for better resourcing, closer regulatory cooperation, and effective frameworks to address the challenges AI poses in the UK. As the technology continues to evolve rapidly, the implications for human rights and societal welfare are significant, demanding thoughtful and proactive regulatory strategies.