UK AI Copyright Rules Risk Innovation and Equity

UK AI Copyright Rules May Backfire, Causing Biased Models & Low Creator Returns

Barring companies like OpenAI, Google, and Meta from training AI on copyrighted material in the UK may undermine model quality and economic impact, policy experts warn. They argue that such restrictions would bias model outputs, undermining their effectiveness, while rightsholders are unlikely to receive the level of compensation they anticipate.

The UK government opened a consultation in December 2024 to explore ways to protect the rights of artists, writers, and composers when creative content is used to train AI models. It outlined a system that permits AI developers to use online content for training unless the rightsholder explicitly opts out.

Bodies representing the creative industries largely rejected this proposal, as it put the onus on creators to exclude their content rather than requiring AI developers to seek consent. Tech companies also voiced concerns, arguing that the system would complicate the legal use of content, restrict commercial applications, and demand excessive transparency.

Opt-out Regimes May Result in Poorly Trained AI and Minimal Income for Rightsholders

Benjamin White, founder of copyright reform advocacy group Knowledge Rights 21, argued that regulations on AI training will affect more than just the creative industries. Since copyright is designed to stimulate investment by protecting intellectual property, he emphasized the broader economic impact of any restrictions.

He stated, “The rules that affect singers affect scientists, and the rules that affect clinicians affect composers as well. Copyrights are sort of a horizontal one-size-fits-all.” White expressed concern over the framing of the consultation, noting it overlooks the potential benefits of knowledge sharing in advancing academic research, which offers widespread advantages for society and the economy.

White highlighted the limitations of existing exceptions, which do not allow universities or NHS trusts to share training or analysis data derived from copyrighted materials, such as journal articles.

Bertin Martens, senior fellow at the economic think tank Bruegel, criticized the media industries for wanting to benefit from AI while simultaneously withholding their data for training. “If AI developers signed licensing agreements with just the consenting publishers or rightsholders, then the data their models are trained on would be skewed,” he explained.

Martens noted that even large AI companies would find it infeasible to sign licenses with numerous small publishers due to excessive transaction costs, leading to biased models with incomplete information.

Julia Willemyns, co-founder of the tech policy research project UK Day One, warned that the opt-out regime might not be effective, as jurisdictions with less restrictive laws will still allow access to the same content for training. She cautioned that blocking access from those jurisdictions could deprive the UK of the best available models, ultimately slowing down technology diffusion and harming productivity.

Economic Implications for Creators

Artists are also unlikely to earn meaningful income from AI licensing deals. Willemyns explained, “The problem is that every piece of data isn’t worth very much to the models; these models operate at scale.” Even with global enforcement of licensing regimes, the economic benefits for creators would likely be minimal: the UK would accept real costs to its economy in exchange for negligible gains for individual rightsholders.

Willemyns also cautioned against overcomplicating the UK’s copyright approach by requiring separate regimes for AI training on scientific and creative materials, which could create legal uncertainty, burden the courts, and deter business adoption.

Conclusion

Policy experts agree that a broad text and data mining exception would simplify the legal landscape and help maximize AI’s potential. As the consultation continues, the central challenge remains striking a balance that fosters innovation while protecting creators’ rights; without one, the UK’s proposed rules risk delivering biased models and insufficient compensation for the very creators they are meant to protect.
