EU AI Guidelines Spark Controversy and Demand for Revision

EU Guidelines on AI Use Met with Massive Criticism

The recent publication of the EU Code of Practice on artificial intelligence (AI) has sparked widespread criticism, particularly with the enforcement deadline looming just two weeks away. The code supplements the EU AI Act and is intended to guide the deployment of AI technologies.

Overview of the Code of Practice

The General-Purpose AI Code of Practice (GPAI Code of Practice) is the EU’s first attempt to establish guidelines for the regulation of general-purpose AI. Its primary objective is to simplify compliance with the EU AI Act, whose obligations for general-purpose AI models apply from August 2, 2025, with practical enforcement starting in 2026.

The Code of Practice is divided into three main chapters:

  • Transparency: This chapter provides a user-friendly template for documentation, enabling providers to meet the transparency obligations outlined in Article 53 of the AI Act.
  • Copyright: The copyright chapter offers practical solutions for compliance with EU copyright law, also in accordance with Article 53 of the AI Act.
  • Safety and Security: This section outlines advanced practices for addressing systemic risks associated with AI models, applicable primarily to providers of general-purpose AI models with systemic risks (Article 55 of the AI Act).

Criticism from Stakeholders

Despite the intention behind the guidelines, they have been met with significant backlash from various stakeholders, including lobby groups, CEOs, and non-governmental organizations (NGOs).

Bitkom’s Perspective

The German digital association Bitkom acknowledges the guidelines as a potential avenue for creating legal certainty in AI development in Europe. However, Bitkom also highlights critical points regarding the complexity and bureaucratic burden of the proposed regulations. Susanne Dehmel, a member of Bitkom’s management board, warns that the Code of Practice must not hinder Europe’s AI competitiveness. She emphasizes the need to improve vague audit requirements and reduce bureaucratic pressures.

Voices from EU CEOs

In an open letter, over 45 top executives voiced concerns about the EU’s regulatory approach to AI, cautioning that the complexity of the regulations could undermine competitiveness. They advocate for a two-year postponement of the implementation of the EU AI Act. This letter was initiated by the EU AI Champions Initiative, which represents around 110 EU companies, including major players such as Mercedes-Benz and Airbus.

Calls for a New AI Act

Some industry leaders, such as Roland Busch (CEO of Siemens) and Christian Klein (CEO of SAP), argue that the current framework is inadequate. They advocate for a fundamental revision of the EU AI Act to foster innovation rather than stifle it, labeling the existing regulations as “toxic” to the development of digital business models.

Concerns from NGOs

The NGO The Future Society has expressed concern that U.S. tech companies succeeded in diluting critical provisions during the drafting process. Nick Moës, its executive director, states that the weakened code disadvantages European citizens and businesses and compromises security and accountability.

Key Points of Criticism

The Future Society outlines four primary areas of concern:

  • Delayed Information Sharing: The AI Office will only receive essential information after models are placed on the market, allowing potentially harmful models to reach users unchecked.
  • Inadequate Whistleblower Protection: The code lacks robust protections for whistleblowers inside AI companies, even though insiders are often the most important source of information about emerging risks.
  • Lack of Emergency Planning: The absence of mandatory emergency response protocols is criticized, especially given the rapid spread of damage caused by general-purpose AI.
  • Extensive Provider Control: Concessions won by industry during the drafting process allow providers to identify risks themselves and manage their own evaluation processes, raising concerns about accountability.

The EU’s approach to regulating AI remains a contentious topic, with ongoing debates about the balance between fostering innovation and ensuring safety and accountability in AI technologies. As the enforcement date approaches, stakeholders continue to voice their concerns, pushing for revisions that could significantly impact the future landscape of AI in Europe.
