Brussels on Edge: Awaiting the Third Draft of the GPAI Code of Practice

Quiet Before the Storm: The Anticipation Surrounding the Third GPAI Code of Practice Draft

As anticipation cloaks the Brussels AI scene, stakeholders await the release of the third draft of the Code of Practice for General Purpose AI (GPAI) models, originally scheduled for the week of February 17. The delay has heightened tensions among the various participants, including Big Tech companies, civil society organizations, and regulatory bodies.

A Climate of Tension

The atmosphere in Brussels is charged with tension. Big Tech firms are expressing reluctance to sign the Code of Practice (CoP), while the European Commission pushes for simplification of the guidelines. Some civil society organizations, feeling marginalized, are contemplating withdrawal from the process, fearing it may legitimize what they perceive as a flawed framework.

One civil society representative ominously noted, “The real battle will likely begin once the third draft is published.” This sentiment reflects a broader unease about the efficacy and inclusivity of the ongoing discussions.

A Legacy of Conflict

The forthcoming draft is not the first instance where industry, civil society, and regulators have clashed over GPAI regulation. The AI Act, intended to address the risks posed by AI, was conceptualized before the rise of GPAI models such as the one behind ChatGPT. As a result, the Act lacks targeted provisions for these flexible models, which can pose systemic risks across a wide range of use cases.

In a previous conflict, a Franco-German-Italian coalition attempted to dilute GPAI rules, fearing they would hinder the EU’s competitiveness. This move was met with criticism, particularly after Mistral AI’s partnership with Microsoft prompted questions about the benefits of relaxed GPAI requirements.

Legislative Challenges

In August, debate over a controversial California AI bill highlighted the challenges faced in both the U.S. and EU regulatory landscapes. While some experts suggested the California bill could complement EU regulation, it ultimately succumbed to industry pressure and was vetoed by Governor Gavin Newsom.

The EU’s CoP is poised to become the most detailed set of legally backed guidelines for GPAI providers, potentially setting a global standard for best practices. As the process enters its final phase, the stakes continue to rise.

Upcoming Timeline

The third draft is critical: it is the last version before the final Code, which is due by May 2. The Code will then be approved by the Commission’s AI Office and the member states, with implementation expected by August 2.

Recent communications from the AI Office indicate that the workshop titled “CSO perspectives on the Code of Practice,” originally scheduled for March 7, will be rescheduled to allow adequate time to review the draft, underscoring the importance of meaningful feedback.

Key Battlegrounds

Three significant issues have emerged in the ongoing discussions:

  • Mandatory third-party testing: A coalition advocating for rigorous safety measures argues that self-testing by companies does not ensure compliance.
  • Risk taxonomy: Concerns are raised about the adequacy of the risk taxonomy, with human rights advocates emphasizing the need for comprehensive coverage of fundamental rights.
  • Copyright transparency: Rightsholders are pushing for strict copyright rules and detailed summaries of training data by GPAI companies.

Industry representatives argue that these requirements exceed the provisions of the AI Act, claiming that the CoP should merely serve as a compliance tool. They also contend that the risk taxonomy is vague and question whether it is grounded in real-world risks.

Conclusion

As the climate in Brussels remains fraught with tension and anticipation, the upcoming third GPAI Code of Practice draft is set to shape the future of AI regulation. With conflicting interests at play, the outcome could significantly influence how GPAI systems are governed moving forward.
