EU AI Code Delay: Implications for General Purpose AI Compliance

AI Act Deadline Missed as EU GPAI Code Delayed Until August

The final version of the EU’s General Purpose AI (GPAI) Code of Practice was due to be published by 2 May. That deadline has now passed without publication, prompting concern among stakeholders and observers.

The EU AI Office has confirmed that the GPAI Code has been delayed, with the final version now expected to be published “by August.” This delay raises questions about the timeline, especially since the provisions related to GPAI model providers under the EU AI Act are set to come into effect on 2 August.

What is the GPAI Code?

The GPAI Code serves as a voluntary code of practice designed to assist providers of GPAI models in demonstrating compliance with their obligations under Articles 53 and 56 of the EU AI Act. These obligations encompass crucial areas such as transparency, copyright, and safety and security.

While most commitments in the GPAI Code apply only to providers of GPAI models with systemic risk, a few apply to all GPAI model providers placing models on the EU market. One such commitment concerns copyright measures, which have proved particularly controversial and attracted significant attention.

Reasons for the Delay

The EU AI Office has not publicly explained the reasons for the delay. However, press reports suggest two main factors behind the decision:

  1. To give participants more time to offer feedback on the third draft of the GPAI Code.
  2. To allow stakeholders to respond to the EU Commission’s ongoing consultation on proposed draft GPAI guidelines, which aims to clarify certain obligations of GPAI model providers under the EU AI Act.

The consultation is open until 22 May and poses critical questions, including: What constitutes a GPAI model? Who qualifies as a “provider”? What does “placing on the market” entail? It also provides guidance on the implications of signing and adhering to the GPAI Code.

There is speculation that the delay may also allow the EU AI Office to assess the level of support for the GPAI Code from major AI providers. The ultimate success of this Code hinges on whether GPAI model providers commit to it.

Commentary on the Delay

This delay was not entirely unexpected. Achieving consensus among stakeholders regarding the GPAI Code was always a challenging task, especially given the contentious issues it covers, such as copyright. Previous attempts by governments, including the UK, to navigate similar challenges have met with limited success.

The divergence of opinions on various issues raises the possibility that a political solution may be necessary. The AI Act stipulates that if the GPAI Code is not finalized by 2 August, or if the final draft is deemed inadequate, the EU Commission may introduce “common rules” through an implementing act.

Additional Challenges for AI Developers

It is important to note that the challenges posed by the GPAI Code are not the only hurdles facing AI model developers in the EU. They are also contending with inquiries from European data regulators regarding GDPR compliance, particularly concerning the use of personal data in training AI models.

For instance, in April 2025, the Irish Data Protection Commission announced an investigation into X’s use of publicly accessible posts from EU users to train its Grok LLMs, focusing on the lawfulness and transparency of the personal data processing involved. Similarly, a German consumer rights association recently cautioned Meta over its plans to train AI models on content from Facebook and Instagram, with backing from the privacy advocacy group noyb.
