EU AI Code Delay: Implications for General Purpose AI Compliance

AI Act Deadline Missed as EU GPAI Code Delayed Until August

The final version of the EU’s General Purpose AI (GPAI) Code of Practice was due to be published by 2 May. That deadline has now passed without the anticipated release, prompting concern among stakeholders and observers.

The EU AI Office has confirmed that the GPAI Code has been delayed, with the final version now expected to be published “by August.” This delay raises questions about the timeline, especially since the provisions related to GPAI model providers under the EU AI Act are set to come into effect on 2 August.

What is the GPAI Code?

The GPAI Code serves as a voluntary code of practice designed to assist providers of GPAI models in demonstrating compliance with their obligations under Articles 53 and 56 of the EU AI Act. These obligations encompass crucial areas such as transparency, copyright, and safety and security.

While the majority of commitments in the GPAI Code apply only to providers of GPAI models with systemic risk, a few apply to all GPAI model providers placing models on the EU market. One such commitment concerns copyright measures, which have proved particularly controversial.

Reasons for the Delay

As of now, the EU AI Office has not publicly explained the reasons behind the delay. However, press reports suggest two main factors influencing the decision:

  1. To provide participants more time to offer feedback on the third draft of the GPAI Code.
  2. To allow stakeholders to respond to the EU Commission’s ongoing consultation on proposed draft GPAI guidelines, which aims to clarify certain obligations of GPAI model providers under the EU AI Act.

This consultation is open until 22 May and poses critical questions such as: What constitutes a GPAI model? Who qualifies as a “provider”? What does “placing on the market” entail? Additionally, it provides guidance on the implications of signing and adhering to the GPAI Code.

There is speculation that the delay may also allow the EU AI Office to assess the level of support for the GPAI Code from major AI providers. The ultimate success of this Code hinges on whether GPAI model providers commit to it.

Commentary on the Delay

This delay was not entirely unexpected. Achieving consensus among stakeholders regarding the GPAI Code was always a challenging task, especially given the contentious issues it covers, such as copyright. Previous attempts by governments, including the UK, to navigate similar challenges have met with limited success.

Given how far apart stakeholders remain on key issues, a political solution may ultimately be required. The AI Act provides that if the GPAI Code is not finalized by 2 August, or if the Commission deems the final draft inadequate, the EU Commission may introduce “common rules” through an implementing act.

Additional Challenges for AI Developers

The GPAI Code is not the only hurdle facing AI model developers in the EU. They are also contending with inquiries from European data regulators regarding GDPR compliance, particularly concerning the use of personal data in training AI models.

For instance, in April 2025, the Irish Data Protection Commission announced an investigation into the use of publicly accessible posts from EU users on the X platform for training its Grok LLMs, focusing on the legality and transparency of processing personal data. Similarly, a German consumer rights association has recently cautioned Meta regarding its AI training plans that utilize content from Facebook and Instagram, with backing from the privacy advocacy group noyb.
