Enforcement of the AI Act: A Comprehensive Overview
The European Union Artificial Intelligence Act (AI Act) entered into force on August 1, 2024, with its obligations applying in stages thereafter. It introduces a risk-based framework for regulating AI, categorizing AI systems by risk level and prohibiting certain practices deemed unacceptable, such as social scoring and the manipulation of human behaviour.
One of the fundamental challenges that the AI Act faces is its enforcement. The Act delineates both centralized and decentralized enforcement mechanisms, engaging various actors including national market surveillance authorities, the European Commission via the AI Office, and the European Data Protection Supervisor (EDPS).
1. Market Surveillance Authorities
The enforcement of the AI Act heavily relies on the role of Member States, each of which must designate at least one notifying authority and one market surveillance authority to act as the national competent authorities.
- Notifying Authorities: These entities intervene before AI systems reach the market. They set up and carry out the procedures for assessing, designating, and monitoring conformity assessment bodies (notified bodies), which in turn certify the conformity of high-risk AI systems.
- Market Surveillance Authorities: Once an AI system has been placed on the market or put into service, these authorities supervise its operation within their jurisdiction. Unlike notifying authorities, they can impose sanctions for non-compliance.
Market surveillance authorities are endowed with investigative powers as per Regulation (EU) 2019/1020 and can impose administrative fines for various infringements, including:
- Non-compliance with the prohibited AI practices, with penalties of up to EUR 35 million or 7% of the offender's total worldwide annual turnover, whichever is higher.
- Non-compliance with the obligations listed in Article 99(4) of the AI Act, subject to fines of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.
- Supplying incorrect, incomplete, or misleading information to authorities, incurring fines of up to EUR 7.5 million or 1% of total worldwide annual turnover, whichever is higher.
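The tiered caps above share one rule: the applicable maximum is the higher of the fixed amount and the stated percentage of total worldwide annual turnover. A minimal sketch of that calculation (the turnover figure below is purely illustrative, not taken from the Act):

```python
def fine_cap(fixed_eur: int, pct: float, turnover_eur: int) -> float:
    """Maximum administrative fine for a tier: the higher of the fixed
    amount and the given share of total worldwide annual turnover."""
    return max(fixed_eur, pct * turnover_eur)

# Hypothetical provider with EUR 2 billion worldwide annual turnover
turnover = 2_000_000_000

# Prohibited practices: EUR 35m or 7%, whichever is higher -> 140,000,000
print(fine_cap(35_000_000, 0.07, turnover))

# Article 99(4) obligations: EUR 15m or 3% -> 60,000,000
print(fine_cap(15_000_000, 0.03, turnover))

# Misleading information: EUR 7.5m or 1% -> 20,000,000
print(fine_cap(7_500_000, 0.01, turnover))
```

For a large undertaking the percentage-based ceiling dominates, as here; for a small provider the fixed amount would be the binding cap instead.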
Complaints regarding potential infringements can be submitted by any individual who suspects non-compliance, which broadens the scope of accountability.
2. European Commission and AI Office
The European Commission holds exclusive powers to supervise the obligations of providers of general-purpose AI models, delegating these tasks to the AI Office. The AI Office can act on its own initiative or in response to complaints, including from downstream providers that build systems on general-purpose models.
Equipped with investigative powers, the AI Office can:
- Request documentation and information from AI model providers
- Conduct compliance evaluations and investigate systemic risks
- Impose fines for non-compliance of up to EUR 15 million or 3% of total worldwide annual turnover, whichever is higher.
The AI Office also acts as the market surveillance authority for AI systems built on a general-purpose AI model where the model and the system are provided by the same provider, ensuring that such providers adhere to the applicable rules.
3. European Data Protection Supervisor (EDPS)
The EDPS serves as the market surveillance authority for Union institutions, bodies, offices, and agencies. It has powers similar to those of national authorities, but the financial penalties it can impose are lower. For instance:
- Administrative fines up to EUR 1.5 million for non-compliance with prohibited practices.
- Fines of up to EUR 750,000 for other violations.
4. Cooperation and Coordination
Cooperation among national authorities and the Commission is crucial for effective enforcement. Key mechanisms include:
- Mandatory reporting of non-compliance with cross-border effects.
- Provisional measures to limit the use of non-compliant AI systems.
- Union safeguard procedures where the Commission intervenes in disputes among Member States.
5. Challenges to Implementation
The enforcement framework of the AI Act presents several challenges:
- Lack of a one-stop shop mechanism: Operators face the burden of navigating multiple authorities across different Member States.
- Harmonization issues: Variability in national laws raises concerns regarding procedural aspects and compliance deadlines.
- Dual role of the AI Office: Balancing enforcement duties with its role in developing expertise and guidance may compromise impartiality.
- Varying expertise: Differing levels of AI expertise among Member States could lead to inconsistent enforcement of the Act.
As the landscape of artificial intelligence continues to evolve, addressing these challenges will be crucial for the successful enforcement of the AI Act and ensuring the responsible use of AI technologies in the European Union.