Brussels on Edge: Awaiting the Third Draft of the GPAI Code of Practice

Quiet Before the Storm: The Anticipation Surrounding the Third GPAI Code of Practice Draft

Anticipation hangs over the Brussels AI scene as stakeholders await the third draft of the Code of Practice for General Purpose AI (GPAI) models, originally scheduled for release the week of February 17. The delay has heightened tensions among Big Tech companies, civil society organizations, and regulatory bodies alike.

A Climate of Tension

The atmosphere in Brussels is charged with tension. Big Tech firms are expressing reluctance to sign the Code of Practice (CoP), while the European Commission pushes for simplification of the guidelines. Some civil society organizations, feeling marginalized, are contemplating withdrawal from the process, fearing it may legitimize what they perceive as a flawed framework.

One civil society representative ominously noted, “The real battle will likely begin once the third draft is published.” This sentiment reflects a broader unease about the efficacy and inclusivity of the ongoing discussions.

A Legacy of Conflict

The forthcoming draft is not the first instance where industry, civil society, and regulators have clashed over GPAI regulation. The AI Act, the EU's flagship AI legislation, was conceived before the rise of GPAI models such as ChatGPT. As a result, the Act lacks targeted provisions for these flexible models, which can pose systemic risks across a wide range of use cases.

In a previous conflict, a Franco-German-Italian coalition attempted to dilute GPAI rules, fearing they would hinder the EU’s competitiveness. This move was met with criticism, particularly after Mistral AI’s partnership with Microsoft prompted questions about the benefits of relaxed GPAI requirements.

Legislative Challenges

In August, the debate over a controversial California AI bill illustrated the challenges facing regulators in both the U.S. and the EU. While some experts suggested the California bill could reinforce the EU's regulatory approach, it ultimately succumbed to industry pressure and was vetoed by Governor Gavin Newsom.

The EU’s CoP is poised to become the most detailed set of legally backed guidelines for GPAI providers, potentially setting a global standard for best practice. As the process enters its final phase, the stakes continue to rise.

Upcoming Timeline

The third draft is critical, as it is the last version before the final Code, which is due by May 2. The Code will then undergo approval by the Commission’s AI Office and member states, with implementation expected by August 2.

Recent communications from the AI Office indicate that the workshop titled “CSO perspectives on the Code of Practice,” originally scheduled for March 7, will be rescheduled to allow for adequate review time of the draft, emphasizing the importance of meaningful feedback.

Key Battlegrounds

Three significant issues have emerged in the ongoing discussions:

  • Mandatory third-party testing: A coalition advocating for rigorous safety measures argues that self-testing by companies does not ensure compliance.
  • Risk taxonomy: Concerns are raised about the adequacy of the risk taxonomy, with human rights advocates emphasizing the need for comprehensive coverage of fundamental rights.
  • Copyright transparency: Rightsholders are pushing for strict copyright rules and detailed summaries of training data by GPAI companies.

Industry representatives counter that these requirements exceed the provisions of the AI Act, arguing that the CoP should serve merely as a compliance tool. They also criticize the risk taxonomy as vague and question whether it is grounded in real-world risks.

Conclusion

As the climate in Brussels remains fraught with tension and anticipation, the upcoming third GPAI Code of Practice draft is set to shape the future of AI regulation. With conflicting interests at play, the outcome could significantly influence how GPAI systems are governed moving forward.
