Regulating AI: The Resource Challenge

Lack of Resources Greatest Hurdle for Regulating AI

On February 4, 2026, the Joint Committee on Human Rights convened to address the pressing need for effective regulation of artificial intelligence (AI) in the UK. The committee underscored that closer cooperation between regulators, alongside increased funding, is essential to mitigate the potential human rights harms associated with the rapid proliferation of AI systems.

The Regulatory Landscape

More than a dozen UK regulators currently deal with aspects of AI, but there is no dedicated body focused solely on AI regulation. The government has indicated that existing regulatory frameworks should suffice, yet representatives from the Equality and Human Rights Commission (EHRC), the Information Commissioner’s Office (ICO), and Ofcom cautioned that this fragmented approach risks falling behind the fast pace of AI development.

Mary-Ann Stephenson, chair of the EHRC, said the most significant barrier to effective regulation is resources. The commission’s budget has been frozen at £17.1 million since 2012, a cut of roughly 35% in real terms, which she said leaves it unable to carry out its statutory functions.

Legal Framework and Constraints

The witnesses argued that the legal framework, established primarily through the Equality Act, is largely sufficient to address AI-related discrimination and rights violations; the real constraint lies in the capacity and resources of the regulators. That shortfall forces a reactive rather than proactive approach to enforcement. Stephenson reiterated that the government must ensure existing regulators receive adequate funding and are able to collaborate effectively.

Calls for a Coordinating Body

The committee has expressed strong interest in establishing a dedicated AI regulator. Labour peer Baroness Chakrabarti compared AI regulation to the regulation of the pharmaceutical industry, highlighting the potential benefits and risks associated with AI technologies.

Regulators have suggested that a coordinating body could enhance cross-regulatory mechanisms instead of creating a single super-regulator. Given that AI serves as a general-purpose technology, regulation may be more effective when distributed among sector-specific regulators.

Current Coordination Efforts

Coordination initiatives, such as the Digital Regulation Cooperation Forum (DRCF), established in July 2020, have been implemented to foster collaboration among regulators. This forum has developed cross-regulatory teams to share insights and build collective strategies on digital matters, including algorithmic processing and digital advertising technologies.

The Challenge of Misinformation

As the discussion unfolded, Andrew Breeze, director for online safety technology policy at Ofcom, made a compelling case for enhanced international regulatory cooperation regarding disinformation produced by AI. He noted that under the UK’s Online Safety Act, there is no authority to regulate the spread of misinformation on social media, a gap that contrasts sharply with regulatory measures present in the European Union.

Age Regulation and Online Safety

Regulators also addressed skepticism surrounding age assurance safeguards in relation to proposed social media restrictions for individuals under 16. Breeze acknowledged that while age assurance presents challenges, it remains crucial for child protection.

Concerns Over Deregulation

In a previous committee hearing in November 2025, concerns were raised regarding the UK government’s deregulatory approach to AI. Critics argued that this approach could exacerbate human rights harms and lead to public disenfranchisement.

Silkie Carlo, director of Big Brother Watch, warned that the government’s optimistic perspective on AI could undermine essential protections against automated decision-making, raising alarms about the potential for mass surveillance enabled by AI technologies.

In summary, enhanced resource allocation, regulatory cooperation, and effective frameworks are essential to addressing the challenges AI poses in the UK. As the technology continues to evolve rapidly, its implications for human rights and societal welfare demand thoughtful, proactive regulatory strategies.
