Brussels on Edge: Awaiting the Third Draft of the GPAI Code of Practice

Quiet Before the Storm: The Anticipation Surrounding the Third GPAI Code of Practice Draft

As anticipation builds across the Brussels AI scene, stakeholders await the release of the third draft of the Code of Practice for General Purpose AI (GPAI) models, originally scheduled for the week of February 17. The delay has heightened tensions among participants, including Big Tech companies, civil society organizations, and regulatory bodies.

A Climate of Tension

The atmosphere in Brussels is charged with tension. Big Tech firms are expressing reluctance to sign the Code of Practice (CoP), while the European Commission pushes for simplification of the guidelines. Some civil society organizations, feeling marginalized, are contemplating withdrawal from the process, fearing it may legitimize what they perceive as a flawed framework.

One civil society representative ominously noted, “The real battle will likely begin once the third draft is published.” This sentiment reflects a broader unease about the efficacy and inclusivity of the ongoing discussions.

A Legacy of Conflict

The forthcoming draft is not the first instance where industry, civil society, and regulators have clashed over GPAI regulation. The AI Act, the EU's flagship AI law, was conceived before the rise of GPAI models like ChatGPT. As a result, the Act lacks targeted provisions for these flexible models, which can pose systemic risks across a wide range of use cases.

In a previous conflict, a Franco-German-Italian coalition attempted to dilute GPAI rules, fearing they would hinder the EU’s competitiveness. This move was met with criticism, particularly after Mistral AI’s partnership with Microsoft prompted questions about the benefits of relaxed GPAI requirements.

Legislative Challenges

In August, discussions surrounding a controversial California AI bill highlighted the challenges faced in both the U.S. and EU regulatory landscapes. While experts suggested that the California bill could reinforce the EU's regulatory approach, it ultimately succumbed to industry pressure and was vetoed by Governor Gavin Newsom.

The EU’s CoP is poised to become the most detailed legally backed guidance for GPAI providers, potentially setting a global standard for best practices. As it enters its final phase, the stakes continue to rise.

Upcoming Timeline

The third draft is critical, serving as the last version before the final Code is circulated, due by May 2. Following this, the Code will undergo approval by the Commission’s AI Office and member states, with implementation expected by August 2.

Recent communications from the AI Office indicate that the workshop titled “CSO perspectives on the Code of Practice,” originally scheduled for March 7, will be rescheduled to allow for adequate review time of the draft, emphasizing the importance of meaningful feedback.

Key Battlegrounds

Three significant issues have emerged in the ongoing discussions:

  • Mandatory third-party testing: A coalition advocating for rigorous safety measures argues that self-testing by companies does not ensure compliance.
  • Risk taxonomy: Concerns are raised about the adequacy of the risk taxonomy, with human rights advocates emphasizing the need for comprehensive coverage of fundamental rights.
  • Copyright transparency: Rightsholders are pushing for strict copyright rules and detailed summaries of training data by GPAI companies.

Industry representatives counter that these requirements exceed the provisions of the AI Act, arguing that the CoP should serve merely as a compliance tool. They also criticize the risk taxonomy as vague and question whether it is grounded in real-world risks.

Conclusion

As the climate in Brussels remains fraught with tension and anticipation, the upcoming third GPAI Code of Practice draft is set to shape the future of AI regulation. With conflicting interests at play, the outcome could significantly influence how GPAI systems are governed moving forward.
