How to Finalise the GPAI Code: A Test of Europe’s Commitment to AI Innovation

The finalisation of the General-Purpose AI (GPAI) Code of Practice marks a significant milestone in Europe's effort to establish a robust framework for AI regulation. As the European Commission's AI Office prepares to adopt the third draft of the code, stakeholders are increasingly concerned about regulatory overreach and the clarity of the text. This article examines the key challenges surrounding the GPAI Code and offers recommendations for its finalisation.

1. The Risk of Regulatory Overreach

The latest draft of the GPAI Code introduces several measures that extend beyond the legal framework established by the AI Act. These measures include:

  • New definitions for open source AI models.
  • Rights-reservation protocols allowing rightsholders to opt out of text and data mining.
  • Ambiguous EU standardisation processes that could lead to regional fragmentation.
  • Mandatory external assessments that had previously been rejected during negotiations.

This extensive list of new requirements calls into question the viability of the Code as a voluntary compliance tool. If its measures do not align with the AI Act, signatories may find themselves at a disadvantage compared to non-signatories, ultimately undermining the purpose of the GPAI Code.

2. Clarity and Practicality Challenges

Another significant issue with the current draft is its persistent lack of clarity. Key deliverables from the AI Office, such as guidelines on GPAI rules and templates for public disclosure of training data, are still pending. The absence of these elements leaves the Code incomplete and difficult to interpret.

For instance, the draft introduces concepts such as 'safe-originator model' and 'safely derived model' that depend on alignment with forthcoming guidelines but currently lack clear definitions. There are also concerns about how trade secret protections will be maintained under the new transparency requirements.

3. Ensuring a Strong Finish: Final Recommendations

The success of the GPAI Code hinges on three critical questions:

  • Is it aligned with the AI Act?
  • Is it clear and proportionate?
  • Is it practical?

As the drafting process nears completion, these questions should guide the expert drafters and EU Member States. The final Code must support companies in complying with the AI Act while fostering an environment conducive to AI innovation and competitiveness in Europe.

Conclusion

The finalisation of the GPAI Code is not just a regulatory exercise but a test of Europe’s ambition to lead in AI innovation. A balanced, clear, and practical Code of Practice will empower companies to navigate compliance challenges while enhancing their capacity to innovate. As the EU moves forward, aligning its regulatory framework with the realities of the AI landscape will be essential for maintaining global competitiveness.
