UK AI Copyright Rules Risk Innovation and Equity

UK AI Copyright Rules May Backfire, Causing Biased Models and Low Creator Returns

Barring companies like OpenAI, Google, and Meta from training AI on copyrighted material in the UK may undermine model quality and economic impact, policy experts warn. They argue that such restrictions would skew model outputs, reducing their effectiveness, while rightsholders would be unlikely to receive the level of compensation they anticipate.

The UK government opened a consultation in December 2024 to explore ways to protect the rights of artists, writers, and composers when creative content is used to train AI models. It outlined a system that permits AI developers to use online content for training unless the rightsholder explicitly opts out.

Bodies representing the creative industries largely rejected this proposal, as it put the onus on creators to exclude their content rather than requiring AI developers to seek consent. Tech companies also voiced concerns, arguing that the system would complicate the legal use of content, restrict commercial applications, and demand excessive transparency.

Opt-out Regimes May Result in Poorly Trained AI and Minimal Income for Rightsholders

Benjamin White, founder of copyright reform advocacy group Knowledge Rights 21, argued that regulations on AI training will affect more than just the creative industries. Since copyright is designed to stimulate investment by protecting intellectual property, he emphasized the broader economic impact of any restrictions.

He stated, “The rules that affect singers affect scientists, and the rules that affect clinicians affect composers as well. Copyrights are sort of a horizontal one-size-fits-all.” White expressed concern over the framing of the consultation, noting it overlooks the potential benefits of knowledge sharing in advancing academic research, which offers widespread advantages for society and the economy.

White also highlighted the limitations of existing copyright exceptions, which do not allow universities or NHS trusts to share training or analysis data derived from copyrighted materials, such as journal articles.

Bertin Martens, senior fellow at the economic think tank Bruegel, criticized the media industries for wanting to benefit from AI while simultaneously withholding their data from training. “If AI developers signed licensing agreements with just the consenting publishers or rightsholders, then the data their models are trained on would be skewed,” he explained.

Martens noted that even large AI companies would find it infeasible to sign licenses with numerous small publishers due to excessive transaction costs, leading to biased models with incomplete information.

Julia Willemyns, co-founder of the tech policy research project UK Day One, warned that the opt-out regime might not be effective, as jurisdictions with less restrictive laws will still allow access to the same content for training. She cautioned that blocking access from those jurisdictions could deprive the UK of the best available models, ultimately slowing down technology diffusion and harming productivity.

Economic Implications for Creators

Furthermore, artists are unlikely to earn meaningful income from AI licensing deals. Willemyns explained, “The problem is that every piece of data isn’t worth very much to the models; these models operate at scale.” Even with global enforcement of licensing regimes, the economic benefits for creators would likely be minimal, leaving a trade-off between real harm to the national economy and only negligible gains for rightsholders.

Willemyns also cautioned against overcomplicating the UK’s copyright approach by requiring separate regimes for AI training on scientific and creative materials, which could create legal uncertainty, burden the courts, and deter business adoption.

Conclusion

The policy experts quoted here broadly agree that a text and data mining exception would simplify the legal landscape and help maximize AI’s potential, whereas the proposed opt-out regime risks producing biased models while delivering little compensation to creators. As the consultation proceeds, striking a balance between fostering innovation and protecting creators’ rights remains the central challenge.
