EU AI Code Delay: Implications for General Purpose AI Compliance

AI Act Deadline Missed as EU GPAI Code Delayed Until August

The final version of the EU’s General Purpose AI (GPAI) Code of Practice was due to be published by 2 May. That deadline has now passed without the anticipated release, prompting concern among stakeholders and observers.

The EU AI Office has confirmed that the GPAI Code has been delayed, with the final version now expected to be published “by August.” This delay raises questions about the timeline, especially since the provisions related to GPAI model providers under the EU AI Act are set to come into effect on 2 August.

What is the GPAI Code?

The GPAI Code is a voluntary code of practice designed to help providers of GPAI models demonstrate compliance with their obligations under Articles 53 and 55 of the EU AI Act. These obligations cover transparency, copyright, and safety and security.

While most of the commitments in the GPAI Code apply only to providers of GPAI models with systemic risk, a few apply to all providers placing GPAI models on the EU market. One of these concerns copyright measures, which have proved controversial and attracted significant attention.

Reasons for the Delay

As of now, the EU AI Office has not publicly explained the reasons behind the delay. However, press reports suggest two main factors influencing the decision:

  1. To provide participants more time to offer feedback on the third draft of the GPAI Code.
  2. To allow stakeholders to respond to the EU Commission’s ongoing consultation on proposed draft GPAI guidelines, which aim to clarify certain obligations of GPAI model providers under the EU AI Act.

This consultation is open until 22 May and poses critical questions such as: What constitutes a GPAI model? Who qualifies as a “provider”? What does “placing on the market” entail? The draft guidelines also address the implications of signing and adhering to the GPAI Code.

There is speculation that the delay may also allow the EU AI Office to assess the level of support for the GPAI Code from major AI providers. The ultimate success of this Code hinges on whether GPAI model providers commit to it.

Commentary on the Delay

This delay was not entirely unexpected. Achieving consensus among stakeholders regarding the GPAI Code was always a challenging task, especially given the contentious issues it covers, such as copyright. Previous attempts by governments, including the UK, to navigate similar challenges have met with limited success.

The divergence of opinions on various issues raises the possibility that a political solution may be necessary. The AI Act stipulates that if the GPAI Code is not finalized by 2 August, or if the final draft is deemed inadequate, the EU Commission may introduce “common rules” through an implementing act.

Additional Challenges for AI Developers

It is important to note that the challenges posed by the GPAI Code are not the only hurdles facing AI model developers in the EU. They are also contending with inquiries from European data regulators regarding GDPR compliance, particularly concerning the use of personal data in training AI models.

For instance, in April 2025, the Irish Data Protection Commission announced an investigation into the use of publicly accessible posts from EU users on the X platform for training its Grok LLMs, focusing on the legality and transparency of processing personal data. Similarly, a German consumer rights association has recently cautioned Meta regarding its AI training plans that utilize content from Facebook and Instagram, with backing from the privacy advocacy group noyb.
