Brussels on Edge: Awaiting the Third Draft of the GPAI Code of Practice

Quiet Before the Storm: The Anticipation Surrounding the Third GPAI Code of Practice Draft

As anticipation grips the Brussels AI scene, stakeholders await the release of the third draft of the guidelines for general-purpose AI (GPAI) models, originally scheduled for the week of February 17. The delay has heightened tensions among participants, including Big Tech companies, civil society organizations, and regulatory bodies.

A Climate of Tension

The atmosphere in Brussels is charged with tension. Big Tech firms are expressing reluctance to sign the Code of Practice (CoP), while the European Commission pushes for simplification of the guidelines. Some civil society organizations, feeling marginalized, are contemplating withdrawal from the process, fearing it may legitimize what they perceive as a flawed framework.

One civil society representative ominously noted, “The real battle will likely begin once the third draft is published.” This sentiment reflects a broader unease about the efficacy and inclusivity of the ongoing discussions.

A Legacy of Conflict

The forthcoming draft is not the first point of friction between industry, civil society, and regulators over GPAI regulation. The AI Act, the EU's broader framework for regulating AI, was conceived before the rise of GPAI models like those behind ChatGPT. As a result, its original design lacked targeted provisions for these flexible models, which can pose systemic risks across a wide range of use cases.

In an earlier conflict, a Franco-German-Italian coalition attempted to dilute the GPAI rules, fearing they would hinder the EU's competitiveness. The move drew criticism, particularly after Mistral AI's partnership with Microsoft prompted questions about who would actually benefit from relaxed GPAI requirements.

Legislative Challenges

In August, debate over a controversial California AI bill highlighted the challenges facing both the U.S. and EU regulatory landscapes. While some experts suggested the California bill could complement and reinforce EU rules, it ultimately succumbed to industry pressure and was vetoed by Governor Gavin Newsom.

The EU’s CoP is poised to become the most detailed set of legally backed guidelines for GPAI providers, potentially setting a global standard for best practices. As it enters its final phase, the stakes continue to rise.

Upcoming Timeline

The third draft is critical: it is the last version before the final Code, which is due by May 2. The Code will then need approval from the Commission’s AI Office and member states, with implementation expected by August 2.

Recent communications from the AI Office indicate that the workshop titled “CSO perspectives on the Code of Practice,” originally scheduled for March 7, will be rescheduled to give participants adequate time to review the draft and provide meaningful feedback.

Key Battlegrounds

Three significant issues have emerged in the ongoing discussions:

  • Mandatory third-party testing: A coalition advocating for rigorous safety measures argues that self-testing by companies does not ensure compliance.
  • Risk taxonomy: Human rights advocates argue that the current risk taxonomy is too narrow and must comprehensively cover fundamental rights.
  • Copyright transparency: Rightsholders are pushing for strict copyright rules and for GPAI companies to publish detailed summaries of their training data.

Industry representatives counter that these requirements go beyond what the AI Act itself mandates, arguing that the CoP should serve merely as a compliance tool. They also criticize the risk taxonomy as vague and insufficiently grounded in real-world risks.

Conclusion

As the climate in Brussels remains fraught with tension and anticipation, the upcoming third GPAI Code of Practice draft is set to shape the future of AI regulation. With conflicting interests at play, the outcome could significantly influence how GPAI systems are governed moving forward.
