European Commission Unveils AI Code of Practice for General-Purpose Models

European Commission’s General-Purpose AI Code of Practice

On July 10, 2025, the European Commission published the final version of the General-Purpose AI Code of Practice (the “AI Code”), just three weeks before the obligations related to general-purpose AI models under the EU AI Act are set to take effect.

Compliance with the AI Code is voluntary, but adherence is intended to demonstrate compliance with certain provisions of the EU AI Act. The European Commission states that organizations that commit to the AI Code will benefit from a reduced administrative burden and greater legal certainty than those pursuing alternative means of compliance.

Complementary Guidelines

The AI Code is expected to be supplemented by forthcoming European Commission guidelines, anticipated later this month, which will clarify key concepts relating to general-purpose AI models and aim to ensure that those concepts are interpreted and applied uniformly.

Structure of the AI Code

The AI Code is organized into three distinct chapters, each addressing specific compliance aspects under the EU AI Act:

1. Transparency

This chapter establishes a framework for providers of general-purpose AI models to demonstrate compliance with their obligations under Articles 53(1)(a) and (b) of the EU AI Act. It outlines the necessary documentation and practices required to meet transparency standards.

Significantly, signatories to the AI Code can fulfill the EU AI Act’s transparency requirements by maintaining the relevant information in a model documentation form (included in the chapter), which may be requested by the AI Office or a national competent authority.

2. Copyright

This chapter explains how to demonstrate compliance with Article 53(1)(c) of the EU AI Act, which requires providers to put in place a policy to comply with EU copyright law and to identify and respect expressed reservations of rights.

The AI Code sets out several measures to ensure compliance with Article 53(1)(c), including adopting a copyright policy that incorporates the chapter’s other measures and designating a point of contact for copyright-related complaints.

3. Safety and Security

This chapter applies only to providers of general-purpose AI models with systemic risk and addresses the obligations under Article 55 of the EU AI Act.

The chapter elaborates on the measures necessary to assess and mitigate risks associated with these advanced models. This includes:

  • Creating and adopting a framework detailing the processes and measures for systemic risk assessment and mitigation.
  • Implementing appropriate safety and security measures.
  • Developing a model report that contains details about the AI model, systemic risk assessment, and mitigation processes, which may be shared with the AI Office.

In summary, the European Commission’s General-Purpose AI Code of Practice represents a significant step towards regulating AI technologies, offering a structured approach for organizations to demonstrate compliance while addressing critical issues such as transparency, copyright, and safety.
