Implementation of the European AI Regulation: Initial Q&A by the CNIL
Published in the Official Journal of the European Union (OJEU) on July 12, 2024, the European AI Regulation (AI Act) enters into force on August 1, 2024, with its provisions applying in stages. This document outlines what the regulation means for individuals and organizations, clarifying its requirements and the governance structures that accompany it.
Overview of the AI Regulation
The AI Regulation is the world’s first comprehensive legislation on artificial intelligence. It aims to regulate the development, placing on the market, and use of AI systems that pose risks to health, safety, or fundamental rights.
Risk Classification
The regulation adopts a risk-based approach by categorizing AI systems into four levels:
- Unacceptable Risk: Practices that contradict EU values and fundamental rights are prohibited. Examples include social scoring, the exploitation of individuals’ vulnerabilities, and real-time remote biometric identification by law enforcement in publicly accessible spaces.
- High Risk: AI systems that may endanger individual safety or fundamental rights are classified as high-risk and are subject to stringent requirements, such as conformity assessments and risk-management documentation. These systems are listed in Annex III and, for those integrated into products already subject to EU market-surveillance legislation, in Annex I.
- Specific Transparency Risk: Certain AI systems are subject to explicit transparency obligations, notably where there is a clear risk of manipulation. This includes chatbots, which must inform users that they are interacting with a machine, and content-generation tools, whose output must be identifiable as artificially generated.
- Minimal Risk: For all other AI systems, no specific obligations are mandated. This category includes the majority of AI systems currently in use within the EU.
General-Purpose AI Models
The AI Regulation also governs a new category of general-purpose models, particularly in the field of generative AI. These models, such as the large language models (LLMs) developed by companies like Mistral AI and OpenAI, are defined by their ability to perform a wide range of tasks, which makes them difficult to fit into the risk categories above.
For these models, the regulation provides for graduated obligations, ranging from transparency and documentation requirements to in-depth evaluations and the implementation of systemic risk mitigation measures for the most capable models.
Governance Structure
The governance framework for the AI Regulation operates at two levels: European and national.
European Level Governance
European governance is designed to ensure a consistent application of the AI Regulation and is centered on the European Artificial Intelligence Board, made up of high-level representatives of each member state. The Board is supported by an advisory forum and a scientific panel of independent experts that inform its decisions.
National Level Governance
Each EU member state must designate the competent authorities responsible for market surveillance of AI systems and appoint a single national point of contact to liaise with the European Commission and the other relevant bodies.
CNIL’s Role in Implementing the AI Regulation
The CNIL (French Data Protection Authority) is tasked with ensuring compliance with the General Data Protection Regulation (GDPR), which applies to personal data processed by AI systems whether or not those systems fall under the AI Regulation’s specific requirements.
To support stakeholders in understanding their obligations, the CNIL plans to provide guidance that integrates the AI Regulation and GDPR, thereby clarifying the rules applicable to AI development and deployment.
Timeline for Implementation
The AI Regulation will be implemented in phases:
- February 2, 2025: Prohibitions on AI systems presenting unacceptable risks will take effect.
- August 2, 2025: Rules for general-purpose AI models will apply, and member states must designate their competent authorities.
- August 2, 2026: The bulk of the regulation’s provisions will become applicable, in particular those governing the high-risk AI systems listed in Annex III.
- August 2, 2027: Rules for the high-risk AI systems covered by Annex I will apply.
This phased implementation will rely on harmonized standards developed at the European level to specify the requirements applicable to the AI systems concerned.
Conclusion
The European AI Regulation marks a significant step in the regulation of artificial intelligence, seeking to balance support for innovation with the protection of individuals’ rights. As its provisions take effect, stakeholders must prepare for compliance while contributing to an ethical AI landscape.