EU Introduces New Code to Streamline AI Compliance

The European Union has recently unveiled a new voluntary code of practice designed to assist companies across its 27-member bloc in adhering to the forthcoming AI Act. This regulation represents a comprehensive framework that will govern the usage of artificial intelligence within the EU.

Context and Background

As the EU prepares to enforce the AI Act’s rules on general purpose AI, which come into effect on 2 August 2025, the newly released code aims to guide organizations navigating this complex landscape. Full enforcement of the AI Act is anticipated to begin at least a year later.

Key Focus Areas of the Code

The code addresses three critical areas:

  • Transparency Obligations: Requirements for providers of general purpose AI models, including documentation for the downstream companies that integrate these models into their products.
  • Copyright Protections: Ensuring that intellectual property rights are upheld in AI applications.
  • Safety and Security: Focused on the robust functioning of advanced AI systems.

The code assists firms by clarifying the compliance requirements of the AI Act, which classifies AI use cases by their associated risk level, from minimal to unacceptable. Non-compliance with the AI Act can carry severe penalties, including fines of up to €35 million or 7% of a company’s global revenue, whichever is higher.

Understanding General Purpose AI

General purpose AI refers to systems capable of executing a broad spectrum of tasks, such as OpenAI’s ChatGPT. These models are foundational to numerous AI applications currently operating across various sectors within the EU. The code aims to provide a practical pathway for businesses grappling with the intricacies of the full legislation.

Industry Response and Concerns

Despite the EU’s intentions, the regulation has encountered rising criticism from segments of the industry. Recently, over 40 European companies—including major names like Airbus, Mercedes-Benz, and Philips—signed an open letter advocating for a two-year delay in the implementation of the AI Act. The letter expressed worries about the “unclear, overlapping and increasingly complex” regulatory demands, suggesting that these could jeopardize Europe’s competitive edge in the global AI arena.

EU’s Stance and Future Outlook

Despite these appeals, the European Commission has indicated no intention to postpone the rollout of the AI Act. It continues to stress the significance of responsible AI development. Henna Virkkunen, Executive Vice President for Tech Sovereignty, Security, and Democracy at the European Commission, remarked, “Today’s publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent.”

The publication of this code represents a critical development in the EU’s regulatory landscape for AI, aiming to balance innovation with safety and compliance.
