Code of Practice for General-Purpose AI Models: Compliance Just Got Clearer

On July 11, 2025, the European Commission published the final version of its Code of Practice for General-Purpose Artificial Intelligence (GPAI). The Code is designed to help GPAI model providers comply with the transparency, copyright, and safety and security obligations set out in the AI Act (articles 53 and 55), which become applicable on August 2, 2025. Notably, adherence to the Code is voluntary.

Developed through a multi-stakeholder process involving independent experts, the Code of Practice is structured into three main chapters: Transparency, Copyright, and Safety and Security. The last chapter, which addresses systemic risks posed by GPAI models, is not covered in this alert.

Chapter 1: Transparency

Providers of GPAI models are bound by the transparency obligations set forth in article 53(1)(a) and (b) of the AI Act. They are required to draw up and keep up to date documentation on the functioning of their GPAI models and to share this information with the AI Office, national authorities, and downstream providers of AI systems built on their models. Beyond the specifications listed in Annexes XI and XII of the AI Act, providers can commit to the Transparency chapter of the Code of Practice, which sets out three key measures:

Measure 1.1 – Documentation Maintenance

Under the AI Act, GPAI model providers must create and maintain documentation that includes, but is not limited to, the tasks their models can perform, acceptable use policies, technical details (architecture, parameters, input/output formats), and licensing information. The obligations vary based on whether the documentation is for the AI Office or downstream providers.

The most significant outcome of the Code of Practice is the Model Documentation Form, which signatories may complete for compliance purposes. It consolidates into a single document the minimum information GPAI model providers commit to making available, and indicates which recipient each piece of information is intended for.
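
The Code itself defines the form’s exact fields, so the sketch below is illustrative only: a minimal Python representation of how a provider might organize form entries and tag each one with its intended recipients. All field names, values, and recipient labels are assumptions, not the Code’s actual schema.

```python
# Illustrative sketch only: the Code of Practice defines the authoritative
# Model Documentation Form. Field names, values, and recipient labels here
# are assumptions for demonstration purposes.
from dataclasses import dataclass, field


@dataclass
class Entry:
    value: str
    recipients: list[str]  # who may request this item


@dataclass
class ModelDocumentationForm:
    entries: dict[str, Entry] = field(default_factory=dict)

    def view_for(self, recipient: str) -> dict[str, str]:
        """Return only the entries intended for the given recipient."""
        return {name: e.value for name, e in self.entries.items()
                if recipient in e.recipients}


form = ModelDocumentationForm(entries={
    "intended_tasks": Entry("Text generation and summarization",
                            ["AI Office", "downstream providers"]),
    "acceptable_use_policy": Entry("https://example.com/aup",  # placeholder
                                   ["downstream providers"]),
    "architecture": Entry("Decoder-only transformer, 7B parameters",
                          ["AI Office"]),
    "license": Entry("Proprietary, per-model license",
                     ["downstream providers"]),
})

print(form.view_for("downstream providers"))
```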

Measure 1.2 – Information Disclosure

Providers must publicly disclose contact information on their websites or through other means to allow the AI Office and downstream providers to request access to essential information, particularly that contained in the Model Documentation.

Measure 1.3 – Quality Assurance

GPAI model providers are responsible for ensuring the quality and integrity of the documented information that serves as evidence of compliance with the AI Act. They are encouraged to adopt established protocols and technical standards to enhance this quality and security.

Chapter 2: Copyright

The AI Act makes explicit references to Union law on copyright and related rights. The intersection between the regulatory obligations of the AI Act and copyright law, which is only partially harmonized at the EU level and remains governed by national law, presents complexities, especially given the AI Act’s extra-territorial effect.

The Copyright chapter of the Code of Practice provides guidance on GPAI model providers’ obligation to put in place a policy that complies with Union copyright law, in particular with the text and data mining exception and the rights reservation mechanism set out in article 4(3) of Directive (EU) 2019/790 (article 53(1)(c) of the AI Act). The separate obligation to disclose a summary of the content used to train GPAI models (article 53(1)(d)) is not addressed in the Code.

Measure 1.1 – Copyright Policy Implementation

According to the Code, GPAI model providers should develop, maintain, and implement a copyright policy, ideally summarized in a publicly available document.

Measure 1.2 – Compliance with Copyright Laws

Providers using web crawlers to collect training data must commit to not circumventing effective technological protection measures and to excluding websites known for copyright infringement.
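
One way such an exclusion could be implemented, assuming a provider-maintained blocklist of infringing domains (the domain names below are placeholders), is a simple host check before each fetch:

```python
# Minimal sketch: skipping domains on a provider-maintained exclusion list
# of sites recognized for copyright infringement. The list contents are
# hypothetical placeholders, not any officially recognized list.
from urllib.parse import urlparse

EXCLUDED_DOMAINS = {"pirated-books.example", "warez-mirror.example"}


def may_crawl(url: str) -> bool:
    host = urlparse(url).hostname or ""
    # Exclude the listed domain itself and any of its subdomains.
    return not any(host == d or host.endswith("." + d)
                   for d in EXCLUDED_DOMAINS)


print(may_crawl("https://pirated-books.example/library/123"))  # False
print(may_crawl("https://news.example.org/article"))           # True
```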

Measure 1.3 – Observing Rights Reservations

Web crawlers used by GPAI model providers must comply with machine-readable rights reservations, such as those expressed through the Robots Exclusion Protocol (robots.txt). Providers must also ensure that honoring these reservations does not adversely affect how the affected content is presented in search results.
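
As a minimal sketch of what honoring robots.txt can look like in practice, the following uses Python’s standard-library urllib.robotparser to test whether a crawler may fetch a URL; the user-agent string and URLs are placeholders:

```python
# Minimal sketch: checking a machine-readable rights reservation expressed
# via robots.txt before crawling, using Python's standard library.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # placeholder site
parser.read()  # fetches and parses the robots.txt file

# "ExampleGPAIBot" is a hypothetical crawler user-agent string.
if parser.can_fetch("ExampleGPAIBot", "https://example.com/articles/1"):
    print("Crawling permitted by robots.txt")
else:
    print("Rights reservation in place; skip this URL")
```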

Measure 1.4 – Mitigating Copyright Infringement Risks

Providers are required to implement appropriate technical safeguards to prevent infringing outputs and include prohibitions against copyright infringement in their acceptable use policies. This reflects a tension between the AI Act’s best-effort approach and the strict liability standards of many national copyright laws.
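
The Code does not prescribe what such safeguards must look like. Purely as a toy illustration, a provider might screen outputs for long verbatim overlaps with a reference set of protected text; the corpus, n-gram length, and threshold below are all assumptions, and production systems would use far more robust methods:

```python
# Toy illustration of one possible technical safeguard: flagging model
# outputs that reproduce long verbatim runs from a reference set of
# protected works. Corpus, n-gram length, and threshold are illustrative.

def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def looks_infringing(output: str, protected_corpus: list[str],
                     n: int = 8, threshold: float = 0.3) -> bool:
    out_grams = ngrams(output, n)
    if not out_grams:
        return False
    protected = set().union(*(ngrams(doc, n) for doc in protected_corpus))
    overlap = len(out_grams & protected) / len(out_grams)
    return overlap >= threshold


# Usage: block or regenerate outputs that overlap heavily with the corpus.
corpus = ["the quick brown fox jumps over the lazy dog and runs far away"]
print(looks_infringing("the quick brown fox jumps over the lazy dog and runs",
                       corpus))  # True
```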

Measure 1.5 – Designating Contact Points

Finally, signatories are obliged to appoint a contact point for electronic communication with affected right holders and establish a mechanism to address complaints regarding non-compliance with the copyright commitments outlined in the Code.

Conclusion

Through this Code of Practice, the European Commission offers GPAI model providers a practical route to complying with the AI Act, while leaving legal uncertainties about its interpretation unresolved. The Code itself makes clear that only the interpretation of the Court of Justice of the European Union is binding.

The Model Documentation Form presents a potentially useful tool; however, complex issues related to the interaction between the AI Act and EU/national copyright law remain urgent and unresolved.

As adherence to the Code of Practice is voluntary and its benefits are not guaranteed, it remains to be seen whether it will significantly impact the internal market, the adoption of human-centric and trustworthy AI, and the high standards of protection for health, safety, and fundamental rights enshrined in the EU Charter.
