UK AI Copyright Rules Risk Innovation and Equity

Barring companies like OpenAI, Google, and Meta from training AI on copyrighted material in the UK may undermine model quality and economic impact, policy experts warn. They argue that such restrictions will lead to bias in model outputs, undermining their effectiveness, while rightsholders are unlikely to receive the level of compensation they anticipate.

The UK government opened a consultation in December 2024 to explore ways to protect the rights of artists, writers, and composers when creative content is used to train AI models. It outlined a system that permits AI developers to use online content for training unless the rightsholder explicitly opts out.

Bodies representing the creative industries largely rejected this proposal, as it put the onus on creators to exclude their content rather than requiring AI developers to seek consent. Tech companies also voiced concerns, arguing that the system would complicate the legal use of content, restrict commercial applications, and demand excessive transparency.

Opt-out Regimes May Result in Poorly Trained AI and Minimal Income for Rightsholders

Benjamin White, founder of copyright reform advocacy group Knowledge Rights 21, argued that regulations on AI training will affect more than just the creative industries. Since copyright is designed to stimulate investment by protecting intellectual property, he emphasized the broader economic impact of any restrictions.

He stated, “The rules that affect singers affect scientists, and the rules that affect clinicians affect composers as well. Copyrights are sort of a horizontal one-size-fits-all.” White expressed concern over the framing of the consultation, noting it overlooks the potential benefits of knowledge sharing in advancing academic research, which offers widespread advantages for society and the economy.

White highlighted the limitations of existing copyright exceptions, which do not allow universities or NHS trusts to share training or analysis data derived from copyrighted materials, such as journal articles.

Bertin Martens, senior fellow at the economic think tank Bruegel, criticized the media industries for wanting to benefit from AI while simultaneously withholding their data for training. “If AI developers signed licensing agreements with just the consenting publishers or rightsholders, then the data their models are trained on would be skewed,” he explained.

Martens noted that even large AI companies would find it infeasible to sign licenses with numerous small publishers due to excessive transaction costs, leading to biased models with incomplete information.

Julia Willemyns, co-founder of the tech policy research project UK Day One, warned that the opt-out regime might not be effective, as jurisdictions with less restrictive laws will still allow access to the same content for training. She cautioned that blocking access from those jurisdictions could deprive the UK of the best available models, ultimately slowing down technology diffusion and harming productivity.

Economic Implications for Creators

Artists are also unlikely to earn meaningful income from AI licensing deals. Willemyns explained, “The problem is that every piece of data isn’t worth very much to the models; these models operate at scale.” Even if licensing regimes were enforced globally, the resulting income for individual creators would likely be minimal, trading significant national economic costs for negligible gains.

Willemyns also cautioned against overcomplicating the UK’s copyright approach by requiring separate regimes for AI training on scientific and creative materials, which could create legal uncertainty, burden the courts, and deter business adoption.

Conclusion

Policy experts agree that a broad text and data mining exception would simplify the legal landscape and help the UK maximize AI’s potential. Without it, the proposed opt-out regime risks producing biased models while delivering little compensation to creators. As the consultation proceeds, the central challenge remains striking a balance that fosters innovation while protecting the rights of rightsholders.
