EU Introduces New Code to Streamline AI Compliance

EU Publishes Voluntary Code of Practice for AI Compliance

The European Union has unveiled a new voluntary code of practice designed to help companies across its 27-member bloc comply with the AI Act, the comprehensive regulation that will govern the use of artificial intelligence within the EU.

Context and Background

As the EU prepares to apply the AI Act’s rules on general-purpose AI, which take effect on 2 August 2025, the newly released code aims to guide organizations navigating this complex landscape. Full enforcement of those rules is not expected to begin until at least a year later.

Key Focus Areas of the Code

The code addresses three critical areas:

  • Transparency Obligations: Guidance for providers that integrate general-purpose AI models into their products.
  • Copyright Protections: Ensuring that intellectual property rights are upheld in AI applications.
  • Safety and Security: Measures focused on the robust and secure operation of the most advanced AI models.

The code helps firms by clarifying the AI Act’s compliance requirements, which classify AI use cases by risk level, from minimal to unacceptable. Non-compliance with the AI Act can carry severe penalties, including fines of up to €35 million or 7% of a company’s worldwide annual revenue, whichever is higher.
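
To illustrate the penalty ceiling described above, here is a minimal sketch in Python of how the upper bound works, assuming the AI Act’s top tier of €35 million or 7% of worldwide annual turnover, whichever is higher; the figures in the example are hypothetical.

```python
# Illustrative sketch only: the AI Act's top penalty tier caps fines at
# EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
# The turnover figure used below is hypothetical.

def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper bound of a fine under the AI Act's top penalty tier."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Example: a hypothetical firm with EUR 2 billion in worldwide annual turnover
print(f"Maximum fine: EUR {max_fine_eur(2_000_000_000):,.0f}")  # EUR 140,000,000
```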

Understanding General-Purpose AI

General-purpose AI refers to systems capable of performing a broad range of tasks, such as the models behind OpenAI’s ChatGPT. These models underpin many of the AI applications now operating across sectors within the EU. The code aims to offer a practical pathway for businesses grappling with the intricacies of the full legislation.

Industry Response and Concerns

Despite the EU’s intentions, the regulation has drawn growing criticism from parts of the industry. More than 40 European companies, including Airbus, Mercedes-Benz, and Philips, recently signed an open letter calling for a two-year delay to the AI Act’s implementation. The letter warned that “unclear, overlapping and increasingly complex” regulatory demands could jeopardize Europe’s competitive edge in the global AI arena.

EU’s Stance and Future Outlook

Despite these appeals, the European Commission has signaled no intention to postpone the AI Act’s rollout and continues to stress the importance of responsible AI development. Henna Virkkunen, Executive Vice-President for Tech Sovereignty, Security and Democracy at the European Commission, remarked, “Today’s publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent.”

The publication of this code represents a critical development in the EU’s regulatory landscape for AI, aiming to balance innovation with safety and compliance.
