European Commission Unveils AI Code of Practice for General-Purpose Models

On July 10, 2025, the European Commission published the final version of the General-Purpose AI Code of Practice (the “AI Code”), just three weeks before the obligations related to general-purpose AI models under the EU AI Act are set to take effect.

Compliance with the AI Code is voluntary, but adherence to it is intended to demonstrate compliance with certain provisions of the EU AI Act. The European Commission asserts that organizations committing to the AI Code will face a reduced administrative burden and benefit from greater legal certainty than those pursuing alternative compliance routes.

Complementary Guidelines

The AI Code is expected to be supplemented by forthcoming European Commission guidelines, anticipated later this month. These guidelines will clarify key concepts relating to general-purpose AI models and aim to ensure their uniform interpretation and application.

Structure of the AI Code

The AI Code is organized into three distinct chapters, each addressing specific compliance aspects under the EU AI Act:

1. Transparency

This chapter establishes a framework for providers of general-purpose AI models to demonstrate compliance with their obligations under Articles 53(1)(a) and (b) of the EU AI Act. It outlines the necessary documentation and practices required to meet transparency standards.

Notably, signatories to the AI Code can fulfill the EU AI Act’s transparency requirements by maintaining the relevant information in a model documentation form (included in the chapter), which may be requested by the AI Office or a national competent authority.

2. Copyright

This chapter explains how to demonstrate compliance with Article 53(1)(c) of the EU AI Act, which requires providers to put in place a policy to comply with EU copyright law and to identify and comply with expressed reservations of rights.

The AI Code outlines several measures to ensure compliance with Article 53(1)(c), including the implementation of a copyright policy that incorporates the other measures of the chapter and the designation of a point of contact for copyright-related complaints.

3. Safety and Security

This chapter applies only to providers of general-purpose AI models with systemic risk and addresses the obligations under Article 55 of the EU AI Act.

The chapter elaborates on the measures necessary to assess and mitigate risks associated with these advanced models. This includes:

  • Creating and adopting a framework detailing the processes and measures for systemic risk assessment and mitigation.
  • Implementing appropriate safety and security measures.
  • Developing a model report that contains details about the AI model, systemic risk assessment, and mitigation processes, which may be shared with the AI Office.

In summary, the European Commission’s General-Purpose AI Code of Practice represents a significant step towards regulating AI technologies, offering a structured approach for organizations to demonstrate compliance while addressing critical issues such as transparency, copyright, and safety.
