The EU AI Act: A New Era for Artificial Intelligence Regulation in Europe
The EU AI Act marks a significant step in regulating artificial intelligence (AI) within Europe. This legislation aims to create a cohesive legal framework that governs the use of AI technologies across all member states, focusing on both innovation and safety.
The Context of the EU AI Act
In February 2025, world leaders convened at the AI Action Summit in Paris, where the European Commission unveiled a €200 billion investment program aimed at positioning Europe as a leader in the AI sector. The potential risks associated with AI technologies, however, had already prompted the EU to draft the Artificial Intelligence Act, widely recognized as the world’s first comprehensive regulatory framework for AI.
Defining AI
To establish guidelines, a clear definition of AI was necessary. The EU revised its definition several times before settling on the wording now found in Article 3(1) of the AI Act:
“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
A Timeline of the EU AI Act
The journey of the EU AI Act began in 2021, with several critical milestones:
- April 2021: Proposal to regulate AI in the EU published.
- December 2023: Provisional agreement reached on the AI Act.
- July 2024: The AI Act is published in the Official Journal of the EU.
- August 2024: The AI Act officially enters into force.
- February 2025: The first provisions apply, including the prohibitions on unacceptable-risk AI systems and the AI literacy obligations.
- August 2025: Obligations for providers of general-purpose AI models become applicable.
- August 2026: Most remaining provisions, including the bulk of the high-risk requirements, apply.
Risk-Based Approach
The EU AI Act adopts a risk-based approach to regulation, balancing safety and innovation: the higher the risk an AI system poses, the heavier the obligations. The main categories are listed below (a simplified illustration in code follows the list):
- Minimal and Limited Risk: Systems that pose little risk. Minimal-risk systems such as spam filters face no new obligations, while limited-risk systems such as chatbots are subject to transparency requirements (for example, informing users that they are interacting with AI).
- High Risk: Systems demanding stringent compliance measures, such as those used in law enforcement and critical infrastructure.
- Unacceptable Risk: Systems banned within the EU due to their threat to fundamental rights, like social credit scoring systems.
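As a rough illustration of how these tiers can map onto concrete use cases, the toy lookup below encodes the examples named above. The mapping is a simplification for intuition only; the authoritative list of high-risk categories is Annex III of the Act, and real classification requires legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal risk - no new obligations"
    LIMITED = "limited risk - transparency obligations"
    HIGH = "high risk - strict compliance requirements"
    UNACCEPTABLE = "unacceptable risk - prohibited in the EU"

# Simplified, illustrative mapping of example use cases to tiers.
# The authoritative high-risk list is Annex III of the AI Act.
EXAMPLE_CLASSIFICATIONS = {
    "spam filter": RiskTier.MINIMAL,
    "customer service chatbot": RiskTier.LIMITED,
    "biometric identification for law enforcement": RiskTier.HIGH,
    "critical infrastructure management": RiskTier.HIGH,
    "social credit scoring": RiskTier.UNACCEPTABLE,
}

for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case}: {tier.value}")
```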
General Purpose AI Models
The Act also addresses general-purpose AI models (GPAIs), applying a risk-based approach to transparency and compliance, particularly for models that could pose systemic risks. A GPAI model is presumed to pose systemic risk when the cumulative compute used for its training exceeds 10²⁵ floating-point operations (FLOPs).
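The Act states the threshold but not how to estimate a model’s training compute. A common heuristic from the scaling-law literature approximates it as C ≈ 6 × N × D, where N is the parameter count and D is the number of training tokens; the sketch below applies that heuristic purely for intuition, with hypothetical model figures.

```python
# Rough estimate of cumulative training compute for a GPAI model using
# the scaling-law heuristic C ~= 6 * N * D (parameters x training tokens).
# The heuristic is NOT prescribed by the AI Act; it is illustrative only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold for presumed systemic risk

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated compute meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.1e} FLOPs -> systemic risk presumed: {presumed_systemic_risk(70e9, 15e12)}")
```

On these hypothetical figures the estimate lands at roughly 6.3 × 10²⁴ FLOPs, just under the threshold, which shows how close frontier-scale training runs can come to the line.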
Scope and Penalties
The AI Act applies to enterprises that place AI systems on the EU market or whose systems’ outputs are used in the EU, regardless of where those enterprises are established. Non-compliance can lead to severe penalties; for companies, each cap is the stated fixed amount or the percentage of worldwide annual turnover, whichever is higher (see the worked sketch after the list):
- Launching Prohibited AI Systems: Fines up to 7% of global annual turnover or €35 million.
- High-Risk AI Breaches: Fines up to 3% of global annual turnover or €15 million.
- Providing Incorrect Information: Fines up to 1% of global annual turnover or €7.5 million.
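A minimal sketch of the “whichever is higher” rule, using the caps listed above and a hypothetical turnover figure (illustrative only, not legal advice):

```python
# Maximum fine under the AI Act's "fixed amount or % of worldwide
# annual turnover, whichever is higher" rule. Illustrative only.

PENALTY_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),         # launching prohibited AI systems
    "high_risk_breach": (15_000_000, 0.03),      # breaching high-risk obligations
    "incorrect_information": (7_500_000, 0.01),  # supplying incorrect information
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Return the upper bound of the fine for a company: whichever is higher."""
    fixed_cap, turnover_share = PENALTY_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Hypothetical example: EUR 2 billion worldwide turnover, prohibited system.
print(f"Maximum fine: EUR {max_fine('prohibited_ai', 2_000_000_000):,.0f}")
# -> Maximum fine: EUR 140,000,000 (7% of turnover exceeds the EUR 35M floor)
```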
Preparation for Compliance
Companies must begin preparing for compliance with the AI Act now, especially where high-risk AI applications are concerned, since the bulk of those obligations apply from August 2026. Recommended steps include:
- Create a Needs Analysis: Assess current AI usage, staff qualifications, and necessary training.
- Develop a Training Plan: Ensure employees possess the necessary AI competence.
- Appoint an AI Specialist: While not mandatory, having an expert can facilitate compliance.
- Familiarize with AI Standards: Understand relevant technical standards, including ISO/IEC 42001, to align with regulatory requirements.
The EU AI Act represents a comprehensive framework aimed at fostering a safe and innovative AI landscape in Europe. As organizations adapt to these new regulations, they must remain vigilant to harness the benefits of AI while safeguarding public interests.