EU Introduces New Code to Streamline AI Compliance

The European Union has unveiled a voluntary code of practice designed to help companies across its 27-member bloc comply with the forthcoming AI Act, a comprehensive framework that will govern the use of artificial intelligence within the EU.

Context and Background

As the EU prepares to enforce the AI Act's rules on general-purpose AI, which take effect on 2 August 2025, the newly released code aims to guide organizations navigating this complex landscape. Full enforcement of the AI Act is expected to begin at least a year later.

Key Focus Areas of the Code

The code addresses three critical areas:

  • Transparency Obligations: Covering providers that integrate general-purpose AI models into their products.
  • Copyright Protections: Ensuring that intellectual property rights are upheld in AI applications.
  • Safety and Security: Focused on the safe and robust operation of the most advanced AI systems.

This framework assists firms by clarifying the compliance requirements of the AI Act, which classifies AI use cases by risk level, from minimal to unacceptable. Non-compliance may result in severe penalties, including fines of up to €35 million or 7% of a company's global annual revenue, whichever is higher.

Understanding General-Purpose AI

General-purpose AI refers to models capable of performing a broad range of tasks; OpenAI's ChatGPT is a prominent example. These models underpin numerous AI applications currently operating across various sectors within the EU. The code aims to provide a practical pathway for businesses grappling with the intricacies of the full legislation.

Industry Response and Concerns

Despite the EU's intentions, the regulation has drawn growing criticism from parts of the industry. Recently, more than 40 European companies, including Airbus, Mercedes-Benz, and Philips, signed an open letter advocating a two-year delay in the implementation of the AI Act. The letter warned that "unclear, overlapping and increasingly complex" regulatory demands could jeopardize Europe's competitive edge in the global AI arena.

EU’s Stance and Future Outlook

Despite these appeals, the European Commission has indicated no intention to postpone the rollout of the AI Act. It continues to stress the significance of responsible AI development. Henna Virkkunen, Executive Vice President for Tech Sovereignty, Security, and Democracy at the European Commission, remarked, “Today’s publication of the final version of the Code of Practice for general-purpose AI marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent.”

The publication of this code represents a critical development in the EU’s regulatory landscape for AI, aiming to balance innovation with safety and compliance.
