EU Guidelines on AI Use Met with Massive Criticism
The recent publication of the EU Code of Practice on artificial intelligence (AI) has sparked widespread criticism, particularly with the enforcement deadline looming just two weeks away. The code supplements the EU AI Act, the bloc's framework for regulating the deployment of AI technologies.
Overview of the Code of Practice
The General-Purpose AI Code of Practice (GPAI Code of Practice) represents the EU's first effort to establish guidelines for the regulation of general-purpose AI. Its primary objective is to simplify compliance with the EU AI Act, whose obligations for general-purpose AI take effect on August 2, 2025, with practical enforcement starting in 2026.
The Code of Practice is divided into three main chapters:
- Transparency: This chapter provides a user-friendly template for documentation, enabling providers to meet the transparency obligations outlined in Article 53 of the AI Act.
- Copyright: The copyright chapter offers practical solutions for compliance with EU copyright law, also in accordance with Article 53 of the AI Act.
- Safety and Security: This section outlines advanced practices for addressing systemic risks associated with AI models, applicable primarily to providers of general-purpose AI models with systemic risks (Article 55 of the AI Act).
Criticism from Stakeholders
Despite the intention behind the guidelines, they have been met with significant backlash from various stakeholders, including lobby groups, CEOs, and non-governmental organizations (NGOs).
Bitkom’s Perspective
The German digital association Bitkom acknowledges the guidelines as a potential avenue for creating legal certainty in AI development in Europe. However, Bitkom also highlights critical points regarding the complexity and bureaucratic burden of the proposed regulations. Susanne Dehmel, a member of Bitkom’s management board, warns that the Code of Practice must not hinder Europe’s AI competitiveness. She emphasizes the need to improve vague audit requirements and reduce bureaucratic pressures.
Voices from EU CEOs
In an open letter, over 45 top executives voiced concerns about the EU’s regulatory approach to AI, cautioning that the complexity of the regulations could undermine competitiveness. They advocate for a two-year postponement of the implementation of the EU AI Act. This letter was initiated by the EU AI Champions Initiative, which represents around 110 EU companies, including major players such as Mercedes-Benz and Airbus.
Calls for a New AI Act
Some industry leaders, such as Roland Busch (CEO of Siemens) and Christian Klein (CEO of SAP), argue that the current framework is inadequate. They advocate for a fundamental revision of the EU AI Act to foster innovation rather than stifle it, labeling the existing regulations as “toxic” to the development of digital business models.
Concerns from NGOs
The NGO The Future Society has expressed its worries that U.S. tech companies have succeeded in diluting critical regulations during the drafting process. Nick Moës, the executive director, states that the weakened code disadvantages European citizens and businesses and compromises security and accountability.
Key Points of Criticism
The Future Society outlines four primary areas of concern:
- Delayed Information Sharing: The AI Office will only receive essential information after market launch, allowing potentially harmful models to reach users unchecked.
- Inadequate Whistleblower Protection: The code lacks robust protections for whistleblowers inside AI companies, even though insiders are often the only source of information about internal practices.
- Lack of Emergency Planning: The code contains no mandatory emergency response protocols, a gap criticized given how quickly harm caused by general-purpose AI can spread.
- Extensive Provider Control: Following industry lobbying, providers may now identify risks themselves and manage their own evaluation processes, raising concerns about accountability.
The EU’s approach to regulating AI remains a contentious topic, with ongoing debates about the balance between fostering innovation and ensuring safety and accountability in AI technologies. As the enforcement date approaches, stakeholders continue to voice their concerns, pushing for revisions that could significantly impact the future landscape of AI in Europe.