EU AI Guidelines Spark Controversy and Demand for Revision

EU Guidelines on AI Use Met with Massive Criticism

The recent publication of the EU's General-Purpose AI Code of Practice has sparked widespread criticism, particularly with the compliance deadline looming just two weeks away. The code supplements the EU AI Act and is intended to guide providers in deploying AI technologies in line with the Act's requirements.

Overview of the Code of Practice

The General-Purpose AI Code of Practice (GPAI Code of Practice) represents the EU's inaugural effort to establish detailed guidelines for the regulation of general-purpose AI. The primary objective of the code is to simplify compliance with the EU AI Act, whose obligations for general-purpose AI models apply from August 2, 2025, with enforcement beginning in 2026.

The Code of Practice is divided into three main chapters:

  • Transparency: This chapter provides a user-friendly template for documentation, enabling providers to meet the transparency obligations outlined in Article 53 of the AI Act.
  • Copyright: The copyright chapter offers practical solutions for compliance with EU copyright law, also in accordance with Article 53 of the AI Act.
  • Safety and Security: This section outlines advanced practices for addressing systemic risks associated with AI models, applicable primarily to providers of general-purpose AI models with systemic risks (Article 55 of the AI Act).
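
To make the Transparency chapter's idea more concrete, the following is a minimal sketch of what a machine-readable model documentation record might look like. The field names and structure here are illustrative assumptions, not the official template from the Code of Practice.

```python
# Hypothetical sketch of a model documentation record, loosely inspired by
# the Transparency chapter's documentation template. All field names are
# illustrative assumptions, not the official EU template.
from dataclasses import dataclass, field


@dataclass
class ModelDocumentation:
    provider: str
    model_name: str
    release_date: str                       # ISO 8601 date string
    modalities: list = field(default_factory=list)
    training_data_summary: str = ""
    licence: str = ""

    def missing_fields(self):
        """Return the names of required fields that were left empty."""
        required = {
            "provider": self.provider,
            "model_name": self.model_name,
            "release_date": self.release_date,
            "training_data_summary": self.training_data_summary,
        }
        return [name for name, value in required.items() if not value]


doc = ModelDocumentation(
    provider="Example AI GmbH",             # hypothetical provider
    model_name="example-gpai-1",            # hypothetical model
    release_date="2025-08-02",
    modalities=["text"],
)
print(doc.missing_fields())  # flags the empty training-data summary
```

A simple completeness check like `missing_fields()` illustrates the kind of up-front documentation discipline the transparency obligations under Article 53 aim at, though the real template covers far more ground (training data, energy use, intended uses, and so on).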

Criticism from Stakeholders

Despite the intention behind the guidelines, they have been met with significant backlash from various stakeholders, including lobby groups, CEOs, and non-governmental organizations (NGOs).

Bitkom’s Perspective

The German digital association Bitkom acknowledges the guidelines as a potential avenue for creating legal certainty in AI development in Europe. However, Bitkom also highlights critical points regarding the complexity and bureaucratic burden of the proposed regulations. Susanne Dehmel, a member of Bitkom’s management board, warns that the Code of Practice must not hinder Europe’s AI competitiveness. She emphasizes the need to improve vague audit requirements and reduce bureaucratic pressures.

Voices from EU CEOs

In an open letter, over 45 top executives voiced concerns about the EU’s regulatory approach to AI, cautioning that the complexity of the regulations could undermine competitiveness. They advocate for a two-year postponement of the implementation of the EU AI Act. This letter was initiated by the EU AI Champions Initiative, which represents around 110 EU companies, including major players such as Mercedes-Benz and Airbus.

Calls for a New AI Act

Some industry leaders, such as Roland Busch (CEO of Siemens) and Christian Klein (CEO of SAP), argue that the current framework is inadequate. They advocate for a fundamental revision of the EU AI Act to foster innovation rather than stifle it, labeling the existing regulations as “toxic” to the development of digital business models.

Concerns from NGOs

The NGO The Future Society has expressed its worries that U.S. tech companies have succeeded in diluting critical regulations during the drafting process. Nick Moës, the executive director, states that the weakened code disadvantages European citizens and businesses and compromises security and accountability.

Key Points of Criticism

The Future Society outlines four primary areas of concern:

  • Delayed Information Sharing: The AI Office receives essential information only after market launch, allowing potentially harmful models to reach users unchecked.
  • Inadequate Whistleblower Protection: The code lacks robust protections for whistleblowers inside AI companies, even though insiders are often the first to spot safety problems.
  • Lack of Emergency Planning: The absence of mandatory emergency response protocols is criticized, especially given the rapid spread of damage caused by general-purpose AI.
  • Extensive Provider Control: Following industry lobbying, providers may self-identify risks and manage their own evaluation processes, raising concerns about accountability.

The EU’s approach to regulating AI remains a contentious topic, with ongoing debates about the balance between fostering innovation and ensuring safety and accountability in AI technologies. As the enforcement date approaches, stakeholders continue to voice their concerns, pushing for revisions that could significantly impact the future landscape of AI in Europe.
