AI and Artists: Safeguarding Creative Rights in a Digital Era

Protecting Artists’ Rights: What Responsible AI Means for the Creative Industries

The development of artificial intelligence (AI) technologies is accelerating rapidly, driven by significant investment from public and private sectors striving to gain a competitive edge in the AI landscape. With the UK AI industry projected to generate £400 billion by 2030, existing regulatory frameworks are increasingly viewed as obstacles to innovation and investment.

The Need for Responsible AI

In response to the potential risks associated with AI technologies, businesses and public organizations worldwide are increasingly adopting self-regulation to promote responsible AI practices. Initiatives like the Make It Fair campaign, launched by the UK’s creative industries on February 25, 2025, urge the government to support artists and enforce copyright law as part of a responsible approach to AI.

Responsible AI encompasses a comprehensive framework addressing technical challenges and ethical considerations. As companies develop and integrate AI technologies, discussions must extend beyond algorithms and data integrity to include a thoughtful examination of their social and economic impact.

Opportunities and Risks in the Creative Industries

AI has transformed nearly every sector, including the creative industries. Generative AI offers various opportunities, enhancing creative processes and delivering personalized audience experiences while improving efficiency and cost-effectiveness. However, these advancements come with substantial risks, particularly concerning intellectual property rights and the potential reshaping of the workforce.

Generative AI systems rely heavily on human creations; without artists’ original contributions, these technologies would struggle to generate new content. Unfortunately, the lack of transparency and regulation around generative AI creates an environment in which copyrighted works are often used to train models without compensation or explicit consent.

Moreover, as generative AI platforms streamline processes and boost productivity, they risk eliminating jobs within the creative industries. And as AI-generated outputs proliferate, they may come to outnumber original works in training data, leading to a cultural landscape dominated by a bland, uniform AI aesthetic.
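
That homogenization risk can be made concrete with a toy simulation. The sketch below is purely illustrative and assumes, for the sake of argument, that each new model generation is fitted only to samples of the previous generation’s output and that models favor their most typical outputs over rarer ones; under those assumptions, the measurable diversity of “styles” shrinks generation after generation.

```python
# Toy illustration of the feedback loop described above: each model
# generation is refit to samples of the previous generation's output,
# keeping only the most "typical" half (models under-reproduce rare styles).
# The spread of a one-dimensional "style" score collapses generation by
# generation. Purely illustrative; not a model of any real system.
import random
import statistics

random.seed(0)
mean, spread = 0.0, 1.0  # generation 0: trained on diverse human-made work

for generation in range(1, 6):
    samples = [random.gauss(mean, spread) for _ in range(200)]
    # Keep the 100 samples closest to the mean, i.e. the most typical outputs.
    typical = sorted(samples, key=lambda x: abs(x - mean))[:100]
    mean, spread = statistics.fmean(typical), statistics.stdev(typical)
    print(f"generation {generation}: style spread ~ {spread:.3f}")
```

Real training pipelines are far more complex, but the basic dynamic, in which outputs fed back as inputs narrow what a model can reproduce, is what underlies the concern about a uniform AI aesthetic.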

Balancing AI and Copyright

In January 2025, the UK unveiled the AI Opportunities Action Plan, outlining the government’s strategy for AI development. Unlike the EU, which adopted its AI Act in 2024, the UK has yet to establish specific legislation addressing AI safety and development, favoring instead a pro-innovation regulatory framework.

On copyright, the UK action plan highlights that uncertainty surrounding intellectual property protection hinders AI innovation. It points to the EU AI Act as a potential model for encouraging innovation while ensuring that copyright holders retain control over their content.

However, despite being the most ambitious AI regulation to date, the Act does not adequately address growing concerns about copyright infringement. It stipulates that any use of copyrighted material requires authorization from the copyright holder unless regulated exceptions, such as text and data mining, apply, and it is these exceptions that complicate the landscape for artists.

Even though copyright holders can opt out of having their works used for AI training, the burden of monitoring falls on artists, who may not even be aware that their creations are being utilized. This makes it nearly impossible for creators to track the theft of their intellectual property.
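
What that monitoring burden looks like in practice can be sketched in a few lines of code. The minimal Python example below checks whether a site hosting an artist’s work uses robots.txt to turn away a handful of well-known AI-training crawlers; the crawler names are real user-agent tokens, but the list is far from exhaustive, the gallery URL is hypothetical, and robots.txt is only one of several opt-out signals in use.

```python
# Minimal sketch: check whether a site's robots.txt tells some well-known
# AI-training crawlers to stay away. The crawler list is an illustrative
# subset, and robots.txt is only one of several opt-out signals.
from urllib.robotparser import RobotFileParser

AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]  # real user-agent tokens

def opt_out_status(site: str, path: str = "/") -> dict[str, bool]:
    """Return, per crawler, whether robots.txt still allows it to fetch `path`."""
    parser = RobotFileParser()
    parser.set_url(f"{site.rstrip('/')}/robots.txt")
    parser.read()  # fetch and parse the live robots.txt
    return {bot: parser.can_fetch(bot, f"{site.rstrip('/')}{path}")
            for bot in AI_CRAWLERS}

if __name__ == "__main__":
    # Hypothetical gallery URL; an artist would have to repeat this check
    # for every site that hosts or mirrors their work.
    print(opt_out_status("https://example-gallery.com", "/portfolio/"))
```

Even this narrow check would have to be repeated for every platform that hosts or mirrors a work, which is precisely why critics argue the opt-out model shifts the cost of enforcement onto individual creators.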

Global Challenges and the Future of AI and Copyright

The International AI Safety Report, published ahead of the AI Action Summit in Paris, highlights the complex issues surrounding copyright in a global context. Rules on online data collection and intellectual property protection vary from country to country, complicating compliance for AI companies operating across borders.

As states navigate the balance between promoting innovation and safeguarding rights, the conversation around AI and copyright continues to evolve. One certainty remains: the creative industries cannot thrive without the original input of creators.
