EU AI Code Delay: Implications for General Purpose AI Compliance

AI Act Deadline Missed as EU GPAI Code Delayed Until August

The final version of the EU’s General Purpose AI (GPAI) Code of Practice was due to be published by 2 May. However, the deadline has elapsed without the anticipated release, prompting concern among stakeholders and observers.

The EU AI Office has confirmed that the GPAI Code has been delayed, with the final version now expected to be published “by August.” This delay raises questions about the timeline, especially since the provisions related to GPAI model providers under the EU AI Act are set to come into effect on 2 August.

What is the GPAI Code?

The GPAI Code serves as a voluntary code of practice designed to assist providers of GPAI models in demonstrating compliance with their obligations under Articles 53 and 56 of the EU AI Act. These obligations encompass crucial areas such as transparency, copyright, and safety and security.

While the majority of commitments in the GPAI Code primarily apply to providers of GPAI models with systemic risk, a few commitments apply to all GPAI model providers entering the EU market. One such commitment involves copyright measures, which have stirred controversy and garnered significant attention.

Reasons for the Delay

As of now, the EU AI Office has not publicly explained the reasons behind the delay. However, press reports suggest two main factors influencing the decision:

  1. To provide participants more time to offer feedback on the third draft of the GPAI Code.
  2. To allow stakeholders to respond to the EU Commission’s ongoing consultation on proposed draft GPAI guidelines, which aims to clarify certain obligations of GPAI model providers under the EU AI Act.

This consultation is open until 22 May and poses critical questions such as: What constitutes a GPAI model? Who qualifies as a “provider”? What does “placing on the market” entail? Additionally, it provides guidance on the implications of signing and adhering to the GPAI Code.

There is speculation that the delay may also allow the EU AI Office to assess the level of support for the GPAI Code from major AI providers. The ultimate success of this Code hinges on whether GPAI model providers commit to it.

Commentary on the Delay

This delay was not entirely unexpected. Achieving consensus among stakeholders regarding the GPAI Code was always a challenging task, especially given the contentious issues it covers, such as copyright. Previous attempts by governments, including the UK, to navigate similar challenges have met with limited success.

The divergence of opinions on various issues raises the possibility that a political solution may be necessary. The AI Act stipulates that if the GPAI Code is not finalized by 2 August, or if the final draft is deemed inadequate, the EU Commission may introduce “common rules” through an implementing act.

Additional Challenges for AI Developers

The GPAI Code is not the only hurdle facing AI model developers in the EU. They are also contending with inquiries from European data regulators regarding GDPR compliance, particularly concerning the use of personal data in training AI models.

For instance, in April 2025, the Irish Data Protection Commission announced an investigation into X's use of publicly accessible posts from EU users to train its Grok LLMs, focusing on the legality and transparency of the personal data processing involved. Similarly, a German consumer rights association recently cautioned Meta over its plans to train AI models on content from Facebook and Instagram, with backing from the privacy advocacy group noyb.
