AI and Copyright: Protecting Creators in a Digital Age

Protecting Artists’ Rights: What Responsible AI Means for the Creative Industries

The global sprint to develop artificial intelligence technologies is intensifying, fueled by substantial investments from both public and private sectors keen to maintain a competitive edge in the AI era. In the UK, the AI industry is predicted to generate £400 billion by 2030. Yet, the regulatory frameworks that govern these advances are often seen as barriers to innovation and investment.

To mitigate the potential risks of AI technologies, businesses and public organizations worldwide are increasingly adopting self-regulation to promote responsible AI practices. The Make It Fair campaign, launched by the UK’s creative industries on February 25, 2025, calls on the UK government to support artists and enforce copyright law through a responsible AI approach.

Responsible AI encompasses a comprehensive framework that addresses various factors, from technical challenges to ethical considerations. As companies develop and incorporate AI technologies, the dialogue must extend beyond algorithms and data integrity to include a thoughtful examination of their social and economic impact.

Initiatives aimed at enhancing transparency and accountability are essential for rebuilding public trust, fostering a collaborative relationship between humans and AI, and paving the way for innovations that are not only effective but welcomed by society.

The need for responsible AI approaches is becoming increasingly urgent as artists confront serious concerns about copyright infringement and job security. In the UK, the creative industries were worth £126 billion and employed 2.4 million people in 2022.

Opportunities and Risks

AI has already transformed nearly every sector, and the creative industries are no exception. Generative AI promises diverse opportunities, from enriching creative processes to delivering personalized audience experiences alongside improvements in efficiency and cost-effectiveness.

As these technologies continue to evolve, providing creators with greater control and improved quality over generated outputs, they are set to become invaluable tools for visual artists, writers, musicians, and producers around the world. However, these opportunities come with substantial risks, particularly concerning intellectual property rights and the potential reshaping of the workforce.

Generative AI systems draw heavily on human creations; without artists’ original contributions, these technologies would be unable to generate new content. Unfortunately, the lack of transparency and regulation for generative AI systems creates an unprecedented environment in which copyrighted works are used to train AI models without compensation or explicit consent.

The same systems that are undermining creators’ intellectual property are also diminishing their job opportunities. As generative AI platforms streamline processes and enhance productivity, they also risk eliminating jobs within the creative industries. And as AI-generated outputs proliferate, they may eventually outnumber original works in training datasets, potentially leading to a cultural landscape dominated by a bland, uniform AI aesthetic.

Balancing AI and Copyright

In January 2025, the UK released the AI Opportunities Action Plan, outlining the government’s strategy for developing AI. While the UK has yet to pass legislation specifically governing AI safety and development, comparable to the EU’s 2024 AI Act, the plan advocates a pro-innovation regulatory framework, which may give AI companies operating in the UK a competitive advantage over those subject to more stringent regimes.

Regarding copyright issues, the UK action plan highlights that the current uncertainty surrounding intellectual property protection is hindering AI innovation and ambitions. It references the EU AI Act as a potential model that encourages AI innovation while ensuring copyright holders maintain control over their content.

However, despite being the most ambitious regulation to date—providing clear expectations and guidelines for AI use in the EU—the act falls short of addressing growing concerns about copyright infringement. The act states that any use of copyrighted material requires authorization from the copyright holder unless regulated exceptions apply.

One significant exception is found in EU Directive 2019/790, which allows the use of copyrighted works for text and data mining. Copyright holders can opt out of this use (for content made available online, through machine-readable means) or reserve the right to remuneration through a licensing agreement, but exercising either option places the burden on artists, who may be unaware of the clause or that their creations are being used to train AI models.

This makes it nearly impossible for creators to track the theft of their intellectual property. Even when they identify an infringement, the cost of suing an AI company remains out of reach for most artists.

In a recent consultation on AI and copyright launched by the UK government, artists and cultural organizations were invited to share their views on the government’s proposed approach. Although the results of this consultation have yet to be published, ministers appear ready to make significant concessions on the initial proposals. Following weeks of mounting protests by UK artists, officials are now discussing changes that might exempt certain sectors from the opt-out system and would give preferential access to British AI companies.

In a call to action from UK unions, the TUC has demanded that legislation guarantee transparency measures to identify the presence of copyrighted works in training data, enabling artists to exercise their rights over how those works are used.

However, copyright challenges don’t stop at national borders. The International AI Safety Report, released after the AI Action Summit in Paris last month, sheds light on this complex issue. Countries have different rules governing online data collection and intellectual property protection, making the global landscape difficult to navigate.

Adding to the difficulty, AI companies struggle with limited tools to properly source and filter training data based on licenses, complicating their ability to verify usage on a large scale. As a result, many developers are becoming hesitant to share details about the content they use.
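To make the filtering problem concrete, here is a deliberately minimal sketch of license-based filtering of training records. The field names, record IDs, and allowlist are hypothetical illustrations, not any company's actual pipeline; the real difficulty the report points to is that license metadata is often missing or unreliable at scale, as the `None` record below suggests.

```python
# Toy illustration of filtering training data by license metadata.
# The allowlist and record fields are hypothetical; real datasets
# rarely carry clean, trustworthy license identifiers like these.
TRAINING_ALLOWLIST = {"CC0-1.0", "CC-BY-4.0", "MIT"}

records = [
    {"id": "img-001", "license": "CC0-1.0"},
    {"id": "img-002", "license": "All-Rights-Reserved"},
    {"id": "img-003", "license": None},  # unknown provenance
    {"id": "img-004", "license": "CC-BY-4.0"},
]

def usable(record):
    """Keep a record only if its license is known and allowlisted."""
    return record["license"] in TRAINING_ALLOWLIST

filtered = [r["id"] for r in records if usable(r)]
print(filtered)  # ['img-001', 'img-004']
```

Note that this approach silently drops everything with unknown provenance, which is exactly the trade-off that makes large-scale verification so costly for developers.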

Meanwhile, website owners are tightening restrictions on data crawling, effectively blocking content extraction altogether, which in turn might hinder legitimate AI research efforts.
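Much of this blocking happens through robots.txt files, which major AI crawlers such as OpenAI's GPTBot say they honor. As a rough sketch (the domain and paths are hypothetical), Python's standard library can check whether a given crawler is shut out of a site:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt in which a site owner blocks an AI
# training crawler (GPTBot) while leaving other agents unrestricted.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The AI crawler is denied everywhere; a generic crawler is not.
print(parser.can_fetch("GPTBot", "https://example.com/gallery/"))      # False
print(parser.can_fetch("ArchiveBot", "https://example.com/gallery/"))  # True
```

The catch, and part of why the transparency demands above go further, is that robots.txt is advisory: nothing technically prevents a crawler from ignoring it.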

As states navigate the fine line between promoting innovation and safeguarding rights, the conversation around AI and copyright is set to evolve. One thing is certain: the creative industries cannot flourish without the original input of creators.
