EU AI Code Delay: Implications for General Purpose AI Compliance

AI Act Deadline Missed as EU GPAI Code Delayed Until August

The final version of the EU’s General Purpose AI (GPAI) Code of Practice was due to be published by 2 May. However, that deadline passed without the anticipated release, prompting concern among stakeholders and observers.

The EU AI Office has confirmed that the GPAI Code has been delayed, with the final version now expected to be published “by August.” That leaves little margin before 2 August, when the EU AI Act’s obligations for GPAI model providers take effect.

What is the GPAI Code?

The GPAI Code is a voluntary code of practice designed to help providers of GPAI models demonstrate compliance with their obligations under Articles 53 and 55 of the EU AI Act (the Code itself is provided for under Article 56). These obligations cover transparency, copyright, and safety and security.

While most of the commitments in the GPAI Code apply only to providers of GPAI models with systemic risk, a handful apply to all providers placing GPAI models on the EU market. One such commitment concerns copyright measures, which have proved controversial and attracted significant attention.

Reasons for the Delay

The EU AI Office has not publicly explained the delay. However, press reports suggest two main factors:

  1. To give participants more time to provide feedback on the third draft of the GPAI Code.
  2. To allow stakeholders to respond to the EU Commission’s ongoing consultation on proposed draft GPAI guidelines, which aims to clarify certain obligations of GPAI model providers under the EU AI Act.

This consultation is open until 22 May and addresses key questions such as: What constitutes a GPAI model? Who qualifies as a “provider”? What does “placing on the market” entail? The draft guidelines also cover the implications of signing and adhering to the GPAI Code.

There is speculation that the delay may also allow the EU AI Office to assess the level of support for the GPAI Code from major AI providers. The ultimate success of this Code hinges on whether GPAI model providers commit to it.

Commentary on the Delay

This delay was not entirely unexpected. Reaching consensus among stakeholders on the GPAI Code was always going to be difficult, particularly on contentious issues such as copyright. Previous attempts by governments, including the UK, to navigate similar issues have met with limited success.

Given this divergence of views, a political solution may ultimately be needed. The AI Act provides that if the GPAI Code is not finalized by 2 August, or if the final draft is deemed inadequate, the EU Commission may introduce “common rules” through an implementing act.

Additional Challenges for AI Developers

The GPAI Code is not the only compliance challenge facing AI model developers in the EU. They are also contending with inquiries from European data protection regulators about GDPR compliance, particularly the use of personal data to train AI models.

For instance, in April 2025 the Irish Data Protection Commission announced an investigation into the use of publicly accessible posts from EU users of the X platform to train the Grok LLMs, focusing on the lawfulness and transparency of the personal data processing. Similarly, a German consumer rights association recently cautioned Meta over its plans to train AI models on content from Facebook and Instagram, with backing from the privacy advocacy group noyb.
