New Code of Practice for AI Models: Key Compliance Insights

On July 11, 2025, the European Commission published the final version of its Code of Practice for General-Purpose Artificial Intelligence (GPAI) models. The Code is designed to help GPAI model providers comply with the transparency, copyright, and safety and security obligations set out in the AI Act (articles 53 and 55), which become applicable on August 2, 2025. Notably, adherence to the Code is voluntary.

Developed through a multi-stakeholder process involving independent experts, the Code of Practice is structured into three main chapters: Transparency, Copyright, and Safety and Security. The last chapter, which addresses systemic risks posed by GPAI models, is not covered in this alert.

Chapter 1: Transparency

Providers of GPAI models are bound by the transparency obligations set forth in article 53(1)(a) and (b) of the AI Act. They are required to draw up and keep up to date documentation on the functioning of their GPAI models and to share this information with the AI Office, national competent authorities, and downstream providers of AI systems that build on their models. Beyond the specifications listed in Annexes XI and XII of the AI Act, providers can commit to the Transparency chapter of the Code of Practice, which sets out three key measures:

Measure 1.1 – Documentation Maintenance

Under the AI Act, GPAI model providers must create and maintain documentation that includes, but is not limited to, the tasks their models can perform, acceptable use policies, technical details (architecture, parameters, input/output formats), and licensing information. The obligations vary based on whether the documentation is for the AI Office or downstream providers.

The most significant outcome of the Code of Practice is the Model Documentation Form. It consolidates into a single form the minimum information that signatories commit to providing and indicates which recipient each piece of information is intended for. Providers may complete the form to demonstrate compliance.
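To make the recipient-tagging idea concrete, here is a minimal, purely illustrative sketch of how a provider might keep documentation entries in one structure and tag each item with its intended recipients. All field names and example values are assumptions for illustration, not the official form's wording.

```python
# Illustrative sketch only: Model Documentation entries tagged with the
# recipients entitled to receive them (AI Office vs. downstream providers).
# Field names and values are hypothetical, not the official form's wording.

MODEL_DOCUMENTATION = [
    {
        "item": "Tasks the model can perform",
        "value": "General-purpose text generation and summarisation",
        "recipients": {"ai_office", "downstream_providers"},
    },
    {
        "item": "Acceptable use policy",
        "value": "https://example.com/aup",  # hypothetical URL
        "recipients": {"ai_office", "downstream_providers"},
    },
    {
        "item": "Architecture and number of parameters",
        "value": "Decoder-only transformer, 7B parameters",
        "recipients": {"ai_office"},
    },
    {
        "item": "Input/output formats",
        "value": "UTF-8 text in, UTF-8 text out",
        "recipients": {"ai_office", "downstream_providers"},
    },
]

def items_for(recipient: str) -> list[str]:
    """List the documented items a given recipient is entitled to receive."""
    return [e["item"] for e in MODEL_DOCUMENTATION if recipient in e["recipients"]]
```

A structure like this makes the single-form principle explicit: one source of documentation, filtered per recipient rather than duplicated per recipient.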

Measure 1.2 – Information Disclosure

Providers must publicly disclose contact information on their websites or through other means to allow the AI Office and downstream providers to request access to essential information, particularly that contained in the Model Documentation.

Measure 1.3 – Quality Assurance

GPAI model providers are responsible for ensuring the quality and integrity of the documented information that serves as evidence of compliance with the AI Act. They are encouraged to adopt established protocols and technical standards to enhance this quality and security.

Chapter 2: Copyright

The AI Act makes explicit references to Union law on copyright and related rights. The intersection between the regulatory obligations of the AI Act and copyright law, which is harmonized at the EU level but governed by national law, presents complexities, especially given the AI Act’s extra-territorial effect.

The Copyright chapter of the Code of Practice provides guidance on GPAI model providers' obligation to establish a policy that complies with Union copyright law, particularly concerning the text and data mining exception outlined in article 4(3) of Directive 2019/790 (article 53(1)(c) of the AI Act). However, the obligation to disclose information about the content used to train GPAI models is not addressed in the Code.

Measure 1.1 – Copyright Policy Implementation

According to the Code, GPAI model providers should develop, maintain, and implement a copyright policy, ideally summarized in a publicly available document.

Measure 1.2 – Compliance with Copyright Laws

Providers using web crawlers to collect training data must commit to not circumventing effective technological protection measures and to excluding websites known for copyright infringement.
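One way to operationalize the exclusion commitment is to filter a crawl frontier against a blocklist of infringing domains. The sketch below is an assumption-based illustration (the domain names are made up; in practice such a list would derive from court findings or rightsholder notices), not a method prescribed by the Code.

```python
# Illustrative sketch only: dropping URLs whose host appears on a
# blocklist of domains identified as persistently infringing copyright.
# The domains below are fictitious examples.
from urllib.parse import urlsplit

INFRINGING_DOMAINS = {"pirated-books.example", "scraped-movies.example"}

def allowed_urls(urls: list[str]) -> list[str]:
    """Keep only URLs whose host is not on the infringement blocklist."""
    return [u for u in urls if urlsplit(u).hostname not in INFRINGING_DOMAINS]
```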

Measure 1.3 – Observing Rights Reservations

Web crawlers used by GPAI model providers must comply with machine-readable rights reservations, such as those expressed through the Robots Exclusion Protocol (robots.txt). Providers must also ensure that honoring these reservations does not adversely affect how the content is presented in search results.
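Honoring a robots.txt reservation can be checked programmatically. The sketch below uses Python's standard-library `urllib.robotparser` against a hypothetical robots.txt in which a rightsholder disallows a made-up "ExampleAIBot" crawler while permitting other agents; it illustrates the mechanism, not any specific provider's implementation.

```python
# Illustrative sketch: checking a URL against a machine-readable rights
# reservation expressed via the Robots Exclusion Protocol (robots.txt).
# "ExampleAIBot" and the robots.txt content are hypothetical.
from urllib import robotparser

ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

def may_fetch(user_agent: str, url: str) -> bool:
    """A compliant crawler checks the reservation before fetching a URL."""
    return parser.can_fetch(user_agent, url)
```

Here `may_fetch("ExampleAIBot", ...)` returns `False` for the whole site, while other user agents remain allowed, mirroring how a rightsholder can reserve rights against AI training crawlers without affecting ordinary indexing.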

Measure 1.4 – Mitigating Copyright Infringement Risks

Providers are required to implement appropriate technical safeguards to prevent infringing outputs and include prohibitions against copyright infringement in their acceptable use policies. This reflects a tension between the AI Act’s best-effort approach and the strict liability standards of many national copyright laws.

Measure 1.5 – Designating Contact Points

Finally, signatories are obliged to appoint a contact point for electronic communication with affected right holders and establish a mechanism to address complaints regarding non-compliance with the copyright commitments outlined in the Code.

Conclusion

Through this Code of Practice, the European Commission offers GPAI model providers a practical route to compliance with the AI Act, while leaving legal uncertainties about its interpretation unresolved. The Code itself clarifies that only the interpretation of the Court of Justice of the European Union is binding.

The Model Documentation Form presents a potentially useful tool; however, complex issues related to the interaction between the AI Act and EU/national copyright law remain urgent and unresolved.

As adherence to the Code of Practice is voluntary and its benefits are not guaranteed, it remains to be seen whether it will significantly impact the internal market, the adoption of human-centric and trustworthy AI, and the high standards of protection for health, safety, and fundamental rights enshrined in the EU Charter.
