AI Compliance: Copyright Challenges in the EU AI Act

The EU AI Act and Copyright Compliance

The EU AI Act establishes a significant framework for ensuring compliance with copyright law in the field of artificial intelligence, particularly for generative AI models such as large language models (LLMs). Because these models require extensive datasets for training, understanding the legal ramifications of using copyrighted content is crucial.

Challenges in Training Generative AI Models

Training generative AI models requires vast amounts of data, including text, images, and other content, which is often sourced by scraping publicly available material from the web. The EU AI Act stresses that this practice must comply with copyright law, particularly where LLMs are concerned.

Recital 105 of the Act notes that the development and training of general-purpose AI models require access to extensive datasets. It states that any use of copyright-protected content requires authorization from the rightsholder unless relevant copyright exceptions and limitations apply.

Definition of General-Purpose AI Models

The Act defines general-purpose AI models as models trained on large amounts of data that display significant generality and are capable of competently performing a wide range of distinct tasks. Examples include:

  • ChatGPT and Google’s PaLM — known for tasks such as code generation, translation, and joke explanation.
  • Claude by Anthropic — adept at content creation, vision analysis, and addressing complex inquiries.

While the AI Act primarily addresses providers of general-purpose AI models, it does not exempt other developers from copyright obligations. The Digital Single Market (DSM) directive remains applicable to all users of copyrighted works, and it was the first legislative effort to address the copyright issues raised by training AI on web-scraped content.

DSM Directive Provisions on AI Training and Copyright

The DSM directive introduced a text and data mining exception to copyright protection. The exception covers a broad range of computational analyses, from search engine indexing to data scraping for AI training. However, the directive was adopted in 2019, before generative AI tools emerged, so lawmakers may not have fully anticipated the implications of LLMs for copyrighted content.

Typically, web scraping of copyrighted materials for AI training is permissible under the DSM directive, provided that rightsholders have not opted out explicitly. Rightsholders can reserve their rights through machine-readable means, such as technical protocols that web crawlers can recognize.
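
In practice, the most commonly respected such protocol is the robots.txt file, discussed further below. As a purely illustrative sketch, a publisher wishing to keep its pages out of AI training corpora might add directives such as the following; GPTBot, Google-Extended, and CCBot are crawler tokens publicly documented by OpenAI, Google, and Common Crawl respectively, and each provider defines its own.

    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /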

AI Act Requirements for Copyright Compliance

Article 53 of the AI Act imposes two primary obligations on general-purpose AI providers:

  1. Implement a policy to comply with EU copyright law, in particular to identify and respect rights reservations expressed under the DSM directive.
  2. Publish a sufficiently detailed summary of the content used for training, promoting transparency and allowing creators to verify whether their works were used.

General-Purpose AI Code of Practice: Copyright Section

The AI Act encourages providers of general-purpose AI models to develop industry best practices, referred to in the Act as codes of practice. A recent draft of the General-Purpose AI Code of Practice outlines measures for complying with the Act's copyright provisions. Notably, it includes a commitment to:

  • Identify and comply with rights reservations when crawling the web.

Respecting Machine-Readable Opt-Outs

Signatories to the draft code are urged to use crawlers that follow the instructions specified by the Robots Exclusion Protocol. The robots.txt file is the standard means by which websites tell web crawlers which parts of a site should not be accessed or indexed. It is essential to understand, however, that robots.txt only guides compliant bots; it does not technically prevent access to copyrighted works.
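
As a minimal sketch of what compliance can look like on the crawler side, the Python snippet below uses the standard urllib.robotparser module to check whether a URL may be fetched. The domain and the "ExampleAIBot" user-agent token are placeholders chosen for illustration, not names taken from the Act or the draft code.

    # Minimal sketch: consult a site's robots.txt before fetching a page.
    # "example.com" and "ExampleAIBot" are illustrative placeholders.
    from urllib import robotparser

    robots = robotparser.RobotFileParser()
    robots.set_url("https://example.com/robots.txt")
    robots.read()  # download and parse the site's robots.txt

    target_url = "https://example.com/articles/some-page.html"
    if robots.can_fetch("ExampleAIBot", target_url):
        print("Crawling permitted for this user agent")
    else:
        print("Disallowed: skip this URL and exclude it from training data")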

Despite robots.txt being the most commonly respected protocol, the lack of a unified standard for rights reservations complicates the landscape for general-purpose AI providers.

Types of Protocols for Opt-Out Compliance

In discussions of opt-out compliance, protocols are generally grouped into two main types:

  • Location-based protocols (e.g., robots.txt, ai.txt) apply to all content on a website.
  • Unit-based protocols tag individual works with metadata indicating the creator’s wish to opt out of AI training (a minimal sketch of this approach follows this list).
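
To illustrate the unit-based idea, the sketch below checks an HTML page for a rights-reservation meta tag. The tag names follow the draft TDM Reservation Protocol (TDMRep) and should be verified against the current specification; the example page itself is invented for illustration.

    # Minimal sketch: detect a unit-level rights reservation in an HTML page.
    # The tag names follow the draft TDM Reservation Protocol (TDMRep);
    # verify them against the current specification before relying on them.
    from html.parser import HTMLParser

    class RightsReservationParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.reserved = False
            self.policy_url = None

        def handle_starttag(self, tag, attrs):
            if tag != "meta":
                return
            attrs = dict(attrs)
            name = (attrs.get("name") or "").lower()
            if name == "tdm-reservation" and attrs.get("content") == "1":
                self.reserved = True  # the rightsholder has opted out
            elif name == "tdm-policy":
                self.policy_url = attrs.get("content")  # optional policy URL

    # Invented example page carrying a reservation tag.
    page = '<html><head><meta name="tdm-reservation" content="1"></head></html>'
    parser = RightsReservationParser()
    parser.feed(page)
    print(parser.reserved)  # True -> exclude this work from training data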

The code encourages signatories to identify protocols that emerge from a cross-industry standard-setting process, with the aim of a unified approach to rights reservation.

Risks of a Unified Protocol

While a unified opt-out protocol may benefit large AI providers, it also carries risks, such as limiting the options of authors who prefer alternative methods of protecting their works. Furthermore, the AI Act’s copyright requirements apply extraterritorially: any general-purpose AI provider entering the EU market must establish a copyright compliance policy, regardless of where training takes place.

In summary, as generative AI technology continues to evolve, navigating the complexities of copyright compliance under the EU AI Act will be crucial for developers and researchers alike. The ongoing development of best practices and standards will play a pivotal role in shaping how AI and copyright interact.
