UK AI Copyright Rules Risk Innovation and Equity

UK AI Copyright Rules May Backfire, Causing Biased Models & Low Creator Returns

Barring companies like OpenAI, Google, and Meta from training AI on copyrighted material in the UK may degrade model quality and blunt the technology's economic impact, policy experts warn. They argue that such restrictions would skew model outputs, making them less reliable, while rightsholders are unlikely to receive the level of compensation they anticipate.

The UK government opened a consultation in December 2024 to explore ways to protect the rights of artists, writers, and composers when creative content is used to train AI models. It outlined a system that permits AI developers to use online content for training unless the rightsholder explicitly opts out.

Bodies representing the creative industries largely rejected this proposal, as it put the onus on creators to exclude their content rather than requiring AI developers to seek consent. Tech companies also voiced concerns, arguing that the system would complicate the legal use of content, restrict commercial applications, and demand excessive transparency.

Opt-out Regimes May Result in Poorly Trained AI and Minimal Income for Rightsholders

Benjamin White, founder of copyright reform advocacy group Knowledge Rights 21, argued that regulations on AI training will affect more than just the creative industries. Since copyright is designed to stimulate investment by protecting intellectual property, he emphasized the broader economic impact of any restrictions.

He stated, “The rules that affect singers affect scientists, and the rules that affect clinicians affect composers as well. Copyrights are sort of a horizontal one-size-fits-all.” White expressed concern over the framing of the consultation, noting it overlooks the potential benefits of knowledge sharing in advancing academic research, which offers widespread advantages for society and the economy.

White highlighted the limitations of existing exceptions, which do not allow universities or NHS trusts to share training or analysis data derived from copyrighted materials, such as journal articles.

Bertin Martens, senior fellow at the economic think tank Bruegel, criticized the media industries for wanting to benefit from AI while simultaneously withholding their data for training. “If AI developers signed licensing agreements with just the consenting publishers or rightsholders, then the data their models are trained on would be skewed,” he explained.

Martens noted that even large AI companies would find it infeasible to sign licenses with numerous small publishers due to excessive transaction costs, leading to biased models with incomplete information.

Julia Willemyns, co-founder of the tech policy research project UK Day One, warned that the opt-out regime might not be effective, since developers in jurisdictions with less restrictive laws would still be able to train on the same content. She cautioned that blocking models trained in those jurisdictions could deprive the UK of the best available systems, ultimately slowing technology diffusion and harming productivity.

Economic Implications for Creators

Artists are also unlikely to earn meaningful income from AI licensing deals. Willemyns explained, “The problem is that every piece of data isn’t worth very much to the models; these models operate at scale.” Even if a licensing regime were enforced globally, the returns to individual creators would likely be minimal, leaving the UK to weigh real national economic costs against negligible gains for rightsholders.

Willemyns also cautioned against overcomplicating the UK’s copyright approach by requiring separate regimes for AI training on scientific and creative materials, which could create legal uncertainty, burden the courts, and deter business adoption.

Conclusion

The policy experts quoted here argue that a broad text and data mining exemption would simplify the legal landscape and help the UK capture AI’s potential, whereas the proposed opt-out regime risks producing biased models while delivering little compensation to creators. As the consultation progresses, the challenge remains to strike a balance that fosters innovation without abandoning creator rights.
