Reforming EU Copyright for AI Innovation


The 2019 European Union Copyright Directive includes an exception that allows commercial text and data mining, but gives rightsholders the possibility, under certain conditions, to opt out. In the 2024 AI Act, the European Union extended these rules to AI training.

The recent AI Action Summit in Paris recognized the need to reflect on IP rights. In Europe, the opt-out system creates legal uncertainty, poses an obstacle to AI innovation, and is likely to fail to secure fair remuneration for creators. The proposal discussed here is to replace the opt-out system with a statutory remuneration right. This would ensure an attractive AI environment while respecting the interests of authors.

A note of clarification: the proposal concerns generative AI, which focuses on content creation, producing outputs that include text, pictures, videos, and music. Leading generative AI models include ChatGPT, Stable Diffusion, and, more recently, DeepSeek’s R1. Training these systems often relies on copyrighted works, raising copyright issues.

Under the EU AI Act, providers of general-purpose AI models must publish a sufficiently detailed summary of the content used to train their models. This obligation is intended to help rightsholders opt out and, once they have opted out, to offer licenses for their content. Without such a reporting requirement, it would be almost impossible for rightsholders to discover that their works were used to train AI.

The ‘opt-out’ mechanism is problematic. It can lead to incomplete training data sets, and the old information-systems adage “garbage in, garbage out” applies to AI. Opt-outs can also mean that AI models are trained in regions not subject to European rules, potentially underrepresenting the continent’s linguistic and cultural diversity. Moreover, under present EU copyright law, the rules on how to opt out remain unclear.

Instead of the inefficient opt-out procedure, a statutory remuneration right seems the right way forward. It avoids the lengthy negotiations that individual licensing agreements entail. If AI operators must strike a deal with each and every rightsholder, they incur enormous transaction costs and risk being left with incomplete datasets that inhibit AI development. Europe, which already struggles to build its own tech industry, would fall further behind.

Such a licensing nightmare creates immense, potentially insurmountable barriers to entry and privileges tech giants over start-up innovators: only the largest tech companies have the means to engage in costly licensing. At the same time, authors may not receive appropriate remuneration. Many of the first license deals between big AI companies and publishers are being signed without the consent of content creators, and how the sums collected are redistributed to authors remains uncertain, depending on the authors’ contracts.

An open question remains: what constitutes fair remuneration? Self-regulation by market players will fail to provide clear answers; it will favor the big players on both sides of the AI and content industries, at the expense of start-ups and individual creators.

In the past, copyright law has successfully evolved to deal with new technologies, and many national legislators have used statutory remuneration rights to balance conflicting interests. After the emergence of record players and radio, US copyright law developed a “permitted-but-paid” regime that curtails the exclusive rights of copyright owners in specific cases. The system prevented the creation of a music monopoly and reduced the transaction costs of licensing sound recordings.

Europe previously introduced, at an early stage, an exception to copyright for private copying, ensuring authors an appropriate share of revenues for the reproduction of their works. It imposes a levy on the purchase of mobile phones, iPads, and other devices which reproduce copyright-protected works. The collected funds are distributed among rightsholders, with a significant portion reserved for authors.

A similar system could be developed for AI: developers would share part of their revenues with the authors whose works are used to train their models. The remuneration can either be set by regulators or negotiated privately, and collective management organizations can administer the system, as they already do for private copying, records, and radio.

If no agreement is reached between AI developers and copyright holders, regulators could mediate. The EU should appoint a specialized independent authority on copyright issues to ensure that creators receive a fair share and that redistribution functions properly. In the absence of a single Europe-wide regulator, national authorities could step in, even if this would be suboptimal for securing a pan-European digital single market.

Of course, remuneration will need to be monitored, preferably by independent authorities; otherwise, it could create a significant hurdle for AI start-ups. The amount due to rightsholders could be calculated according to the actual or potential economic value of their work and adjusted as the market changes. To reduce the risk that such a mechanism benefits only superstars, redistribution could ensure that niche creators receive their fair share.
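To make the redistribution idea concrete, here is a minimal sketch of one possible allocation rule: each rightsholder’s share blends a pro-rata allocation (weighted by some measure of the work’s economic value) with a flat equal split, so that niche creators receive more than pure pro-rata would yield. The function, the weights, and the blending parameter are hypothetical illustrations, not part of any actual proposal or regulation.

```python
def distribute_pool(pool: float, weights: dict[str, float], alpha: float = 0.8) -> dict[str, float]:
    """Split a levy pool among rightsholders.

    Blends a pro-rata share (by usage/value weight) with an equal
    split: alpha=1.0 is pure pro-rata, alpha=0.0 is a flat split.
    Lower alpha shifts money from superstars toward niche creators.
    """
    total = sum(weights.values())
    n = len(weights)
    return {
        name: pool * (alpha * w / total + (1 - alpha) / n)
        for name, w in weights.items()
    }

# Hypothetical example: one dominant creator and two niche creators
shares = distribute_pool(1_000_000, {"superstar": 90, "niche_a": 6, "niche_b": 4})
```

With alpha below 1, the niche creators end up above their pure pro-rata share while the total paid out still equals the pool; a regulator could tune the blending parameter as market conditions change.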

Done right, copyright law can help secure a vibrant environment for culture, creativity, and science. Statutory remuneration for AI, in place of the opt-out, would represent a step in the right direction. It would create an appealing environment for AI in Europe – without jeopardizing the livelihood of authors, music performers, filmmakers, and other content creators.
