UK Delays Comprehensive AI Regulation Amid Copyright Concerns

Proposals to regulate artificial intelligence have been postponed by at least a year as UK ministers aim to introduce more extensive legislation covering both AI safety and the technology's use of copyrighted material.

Comprehensive AI Bill in the Works

The technology secretary, Peter Kyle, has announced plans to draft a “comprehensive” AI bill in the next parliamentary session. The legislation is intended to address safety and copyright concerns raised by AI. However, the bill will not be ready before the next king’s speech, expected in May 2026, prolonging the delay in regulating AI technologies.

Initial Plans and Setbacks

Originally, Labour had intended to present a short, narrowly focused AI bill shortly after taking office. That bill would have concentrated on large language models such as ChatGPT and required companies to submit their models for testing by the UK’s AI Security Institute, with the aim of mitigating the risks posed by advanced AI models that could threaten humanity.

However, this initial bill faced delays as ministers chose to align their plans with the Trump administration in the US. There were concerns that any form of regulation could diminish the UK’s appeal to AI companies.

Inclusion of Copyright Rules

Ministers now seek to incorporate copyright regulations into the forthcoming AI bill. A government source indicated, “We feel we can use that vehicle to find a solution on copyright.” Ministers are meeting both tech leaders and creators to discuss new approaches, and those discussions are set to intensify once the data bill is passed.

Standoff with the House of Lords

The government is currently at an impasse with the House of Lords concerning copyright provisions in a separate data bill. This legislation would enable AI companies to train their models using copyrighted material unless rights holders opt out, a move that has ignited fierce backlash from the creative sector.

High-profile artists such as Elton John, Paul McCartney, and Kate Bush have rallied against these proposed changes, advocating for stronger protections for creators.

Recent Developments and Reactions

This week, peers in the House of Lords backed an amendment to the data bill requiring AI companies to disclose whether they use copyrighted material when training their models, with the aim of enforcing existing copyright law. Ministers have nonetheless resisted calls to change their stance, although Kyle has expressed regret over the government’s handling of the issue.

The government insists that the data bill is not the appropriate mechanism for addressing copyright issues, and has pledged to publish an economic impact assessment and a series of technical reports on copyright and AI.

Public Sentiment and Future Directions

According to a survey by the Ada Lovelace Institute and the Alan Turing Institute, a significant majority of the UK public (88%) believes that the government should have the authority to halt the use of an AI product if it poses a serious risk. Furthermore, over 75% of respondents indicated that AI safety should be overseen by the government or regulators rather than by private companies alone.

Experts suggest that the UK is strategically positioning itself between the US and the EU, attempting to avoid overly stringent regulations that could stifle innovation while exploring meaningful protections for consumers. This ongoing balancing act is crucial as the landscape of AI technology continues to evolve.
