EU AI Guidelines Spark Controversy and Demand for Revision

EU Guidelines on AI Use Met with Heavy Criticism

The recent publication of the EU Code of Practice on artificial intelligence (AI) has drawn widespread criticism, particularly with the compliance deadline looming just two weeks away. The code supplements the EU AI Act, which regulates the deployment of AI technologies.

Overview of the Code of Practice

The General-Purpose AI Code of Practice (GPAI Code of Practice) is the EU’s first attempt to establish guidelines for regulating general-purpose AI. Its primary objective is to simplify compliance with the EU AI Act, which is set to take effect on August 2, 2025, with practical implementation starting in 2026.

The Code of Practice is divided into three main chapters:

  • Transparency: This chapter provides a user-friendly documentation template that helps providers meet the transparency obligations under Article 53 of the AI Act.
  • Copyright: This chapter offers practical solutions for complying with EU copyright law, likewise under Article 53 of the AI Act.
  • Safety and Security: This chapter outlines advanced practices for addressing systemic risks, and applies primarily to providers of general-purpose AI models with systemic risk (Article 55 of the AI Act).

Criticism from Stakeholders

Despite these intentions, the guidelines have met with significant backlash from a range of stakeholders, including lobby groups, CEOs, and non-governmental organizations (NGOs).

Bitkom’s Perspective

The German digital association Bitkom acknowledges that the guidelines could create legal certainty for AI development in Europe, but it also criticizes the complexity and bureaucratic burden of the proposed rules. Susanne Dehmel, a member of Bitkom’s management board, warns that the Code of Practice must not hinder Europe’s competitiveness in AI, and calls for the vague audit requirements to be sharpened and the bureaucratic burden reduced.

Voices from EU CEOs

In an open letter, over 45 top executives voiced concerns about the EU’s regulatory approach to AI, cautioning that the complexity of the regulations could undermine competitiveness. They advocate for a two-year postponement of the implementation of the EU AI Act. This letter was initiated by the EU AI Champions Initiative, which represents around 110 EU companies, including major players such as Mercedes-Benz and Airbus.

Calls for a New AI Act

Some industry leaders, such as Roland Busch (CEO of Siemens) and Christian Klein (CEO of SAP), argue that the current framework is inadequate. They advocate for a fundamental revision of the EU AI Act to foster innovation rather than stifle it, labeling the existing regulations as “toxic” to the development of digital business models.

Concerns from NGOs

The NGO The Future Society is concerned that U.S. tech companies succeeded in watering down key provisions during the drafting process. Its executive director, Nick Moës, says the weakened code puts European citizens and businesses at a disadvantage and compromises security and accountability.

Key Points of Criticism

The Future Society outlines four primary areas of concern:

  • Delayed Information Sharing: The AI Office receives essential information only after a model’s market launch, allowing potentially harmful models to reach users unchecked.
  • Inadequate Whistleblower Protection: The code provides insufficient protection for whistleblowers inside AI companies, even though insiders are a crucial source of information about risks.
  • Lack of Emergency Planning: The code imposes no mandatory emergency response protocols, despite how quickly damage caused by general-purpose AI can spread.
  • Extensive Provider Control: Following industry lobbying, providers may identify risks themselves and manage their own evaluation processes, raising accountability concerns.

The EU’s approach to regulating AI remains a contentious topic, with ongoing debates about the balance between fostering innovation and ensuring safety and accountability in AI technologies. As the enforcement date approaches, stakeholders continue to voice their concerns, pushing for revisions that could significantly impact the future landscape of AI in Europe.
