European Commission Unveils AI Code of Practice for General-Purpose Models

European Commission’s General-Purpose AI Code of Practice

On July 10, 2025, the European Commission published the final version of the General-Purpose AI Code of Practice (the “AI Code”), just three weeks before the obligations related to general-purpose AI models under the EU AI Act are set to take effect.

Compliance with the AI Code is voluntary, but adherence to it is intended to help providers demonstrate compliance with certain provisions of the EU AI Act. The European Commission states that organizations committing to the AI Code will benefit from a reduced administrative burden and greater legal certainty than those pursuing alternative compliance routes.

Complementary Guidelines

The AI Code is expected to be supplemented by forthcoming guidelines from the European Commission, anticipated later this month. These guidelines will clarify key concepts relating to general-purpose AI models and aim to ensure their uniform interpretation and application.

Structure of the AI Code

The AI Code is organized into three distinct chapters, each addressing specific compliance aspects under the EU AI Act:

1. Transparency

This chapter establishes a framework for providers of general-purpose AI models to demonstrate compliance with their obligations under Articles 53(1)(a) and (b) of the EU AI Act. It outlines the necessary documentation and practices required to meet transparency standards.

Notably, signatories to the AI Code can fulfill the EU AI Act’s transparency requirements by maintaining the relevant information in a model documentation form (included in the chapter), which may be requested by the AI Office or a national competent authority.

2. Copyright

This chapter explains how to demonstrate compliance with Article 53(1)(c) of the EU AI Act, which mandates that providers establish a policy to comply with EU copyright law and recognize expressed reservations of rights.

The AI Code outlines several measures to ensure compliance with Article 53(1)(c), including the implementation of a copyright policy that incorporates the other measures of the chapter and the designation of a point of contact for copyright-related complaints.

3. Safety and Security

This chapter applies only to providers of general-purpose AI models with systemic risk and addresses the obligations under Article 55 of the EU AI Act.

The chapter elaborates on the measures necessary to assess and mitigate risks associated with these advanced models. This includes:

  • Creating and adopting a framework detailing the processes and measures for systemic risk assessment and mitigation.
  • Implementing appropriate safety and security measures.
  • Developing a model report that contains details about the AI model, systemic risk assessment, and mitigation processes, which may be shared with the AI Office.

In summary, the European Commission’s General-Purpose AI Code of Practice represents a significant step towards regulating AI technologies, offering a structured approach for organizations to demonstrate compliance while addressing critical issues such as transparency, copyright, and safety.
