Quiet Before the Storm: The Anticipation Surrounding the Third GPAI Code of Practice Draft
As anticipation cloaks the Brussels AI scene, stakeholders await the release of the third draft of the Code of Practice for General Purpose AI (GPAI) models, originally scheduled for the week of February 17. The delay has heightened tensions among participants, including Big Tech companies, civil society organizations, and regulatory bodies.
A Climate of Tension
The atmosphere in Brussels is charged with tension. Big Tech firms are expressing reluctance to sign the Code of Practice (CoP), while the European Commission pushes for simplification of the guidelines. Some civil society organizations, feeling marginalized, are contemplating withdrawal from the process, fearing it may legitimize what they perceive as a flawed framework.
One civil society representative ominously noted, “The real battle will likely begin once the third draft is published.” This sentiment reflects a broader unease about the efficacy and inclusivity of the ongoing discussions.
A Legacy of Conflict
The forthcoming draft is not the first instance where industry, civil society, and regulators have clashed over GPAI regulation. The AI Act, the EU's risk-based framework for regulating AI, was conceived before the rise of GPAI models like ChatGPT; as originally drafted, it lacked targeted provisions for these flexible models, which can pose systemic risks across a wide range of use cases.
In a previous conflict, a Franco-German-Italian coalition attempted to dilute GPAI rules, fearing they would hinder the EU's competitiveness. The move drew criticism, particularly after Mistral AI's partnership with Microsoft prompted questions about who stood to benefit from relaxed GPAI requirements.
Legislative Challenges
In August, discussions surrounding a controversial California AI bill (SB 1047) highlighted the challenges faced in both the U.S. and EU regulatory landscapes. While experts suggested the bill could complement EU regulation, it ultimately succumbed to industry pressure and was vetoed by Governor Gavin Newsom.
The EU’s CoP is poised to become the most detailed legally backed set of guidelines for GPAI providers, potentially setting a global standard for best practices. As it enters its final phase, the stakes continue to rise.
Upcoming Timeline
The third draft is critical: it is the last version before the final Code, which is due by May 2. The Code will then be assessed and approved by the Commission’s AI Office and member states, with implementation expected by August 2.
Recent communications from the AI Office indicate that the workshop “CSO perspectives on the Code of Practice,” originally scheduled for March 7, will be rescheduled to give participants adequate time to review the draft and provide meaningful feedback.
Key Battlegrounds
Three significant issues have emerged in the ongoing discussions:
- Mandatory third-party testing: A coalition advocating for rigorous safety measures argues that self-testing by companies does not ensure compliance.
- Risk taxonomy: Concerns are raised about the adequacy of the risk taxonomy, with human rights advocates emphasizing the need for comprehensive coverage of fundamental rights.
- Copyright transparency: Rightsholders are pushing for strict copyright rules and detailed summaries of training data by GPAI companies.
Industry representatives counter that these requirements go beyond the provisions of the AI Act, arguing that the CoP should serve merely as a compliance tool. They also criticize the risk taxonomy as vague and insufficiently grounded in real-world risks.
Conclusion
As the climate in Brussels remains fraught with tension and anticipation, the upcoming third GPAI Code of Practice draft is set to shape the future of AI regulation. With conflicting interests at play, the outcome could significantly influence how GPAI models are governed moving forward.