The EU AI Act: A Comprehensive Overview
The EU AI Act (AIA) is a groundbreaking regulation of artificial intelligence in the European Union. Officially published in the EU’s Official Journal on 12 July 2024, the AIA will significantly affect organizations that develop or use AI, both within the EU and globally. The Act enters into force on 1 August 2024, imposing risk- and technology-based obligations on the various actors involved in AI.
Key Features of the AIA
The AIA establishes a framework that categorizes AI systems based on their risk levels, which determines the corresponding regulatory requirements. The principal classifications include:
- Prohibited AI Systems
- High-risk AI Systems (HRAIS)
- General Purpose AI (GPAI)
- Other AI systems
Application of the AIA
Application of the AIA hinges on the specific AI technology, its intended use, and the role of the operator. The Act’s risk-based approach outlines that:
- Certain AI systems will be prohibited.
- High-risk AI systems will face stringent obligations.
- General purpose AI models will be regulated irrespective of their use case.
- Low-risk AI systems will encounter minimal transparency requirements.
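As an illustration only, the risk-based approach above can be sketched as a simple triage function. The tier names and keyword shortlists here are hypothetical simplifications for this sketch; the Act's actual classification turns on the detailed criteria in Article 5 and Annex III, not on keyword matching.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    GPAI = "general-purpose AI model"
    MINIMAL = "minimal risk"

# Hypothetical shortlists for illustration only; the Act itself enumerates
# prohibited practices (Article 5) and high-risk areas (Annex III) in detail.
PROHIBITED_USES = {"social scoring", "behavioural manipulation"}
HIGH_RISK_USES = {"recruitment", "credit scoring", "critical infrastructure"}

def triage(use_case: str, is_general_purpose: bool = False) -> RiskTier:
    """Rough illustration of the Act's risk-based tiering."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    # GPAI models carry their own obligations irrespective of use case.
    if is_general_purpose:
        return RiskTier.GPAI
    return RiskTier.MINIMAL
```

Note that in practice the tiers are not mutually exclusive: a general-purpose model can also be a component of a high-risk system, in which case both sets of obligations can apply.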
Implementation Timeline
The AIA enters into force on 1 August 2024, with most provisions applying after a two-year transition period, from 2 August 2026. Notably, the prohibitions on certain AI systems and the AI-literacy requirement apply after just six months (from 2 February 2025), while the GPAI requirements follow after twelve months (from 2 August 2025).
Definition of AI System
According to the AIA, an AI system is “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” This deliberately broad definition captures a wide range of systems that generate outputs influencing their environment based on the inputs they receive.
Prohibited AI Systems
The AIA explicitly prohibits certain AI practices, including:
- Biometric categorization systems that infer sensitive characteristics, and (subject to narrow law-enforcement exceptions) real-time remote biometric identification in publicly accessible spaces.
- Systems that exploit vulnerabilities, such as age or disability, to materially distort a person’s behavior.
- Emotion recognition in the workplace and in educational institutions.
- Social scoring of individuals based on their behavior or personal characteristics.
High-risk AI Systems (HRAIS)
High-risk AI systems, or HRAIS, are subject to the most stringent regulatory obligations. These systems are often involved in critical areas such as:
- Management of critical public infrastructure (e.g., utilities).
- Determining access to education and vocational training.
- Recruitment and employment decisions.
- Migration, asylum, and law enforcement applications.
- The administration of justice and democratic processes.
- Creditworthiness assessment and risk pricing in the insurance and banking sectors.
Providers of HRAIS must implement comprehensive risk management systems, data governance measures, and maintain transparency and human oversight throughout their lifecycle.
General Purpose AI (GPAI)
AI models classified as GPAI, which include foundation and generative AI models, are subject to less stringent obligations than high-risk systems, although models deemed to pose systemic risk face additional requirements. Key obligations include:
- Maintaining technical documentation and putting in place a policy to comply with EU copyright law.
- Publishing a summary of the content used to train the model.
Other AI Systems
For AI systems that are neither high-risk nor prohibited, the primary requirement is a limited transparency obligation: providers must ensure that users know they are interacting with an AI system. This is complemented by a general AI-literacy obligation for staff who operate such systems.
Financial Penalties
The AIA imposes significant financial penalties for non-compliance, ranging from €7.5 million or 1.5% of global annual turnover up to €35 million or 7% of global annual turnover, in each case whichever is higher, depending on the nature of the infringement and the size of the company.
This comprehensive overview of the EU AI Act underscores its potential to reshape the landscape of AI regulation, ensuring safety, transparency, and accountability in the deployment of AI technologies.