EU AI Act: First Rules Take Effect on Prohibited AI Systems and AI Literacy
The European Union’s Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework on AI, entered into force on August 1, 2024. The AI Act sets out staggered compliance deadlines for various areas it regulates.
The Development
As of February 2, 2025, the AI Act’s first compliance deadline has been reached. From that date, the Act’s prohibited-risk category applies, banning the use of AI systems deemed to pose “unacceptable risks.” The Act’s AI literacy rules became applicable on the same day.
Looking Ahead
More compliance deadlines lie ahead in the coming years, and the European Commission continues to issue further guidance on complying with the AI Act. The Commission has also released the Second Draft of the General-Purpose AI Code of Practice to provide clarity and support consistent compliance for general-purpose AI (GPAI) models.
The goal of the EU’s AI Act is to ensure that AI systems placed on the European market and used within the EU are safe and respect fundamental rights and EU values.
First Compliance Deadline
As of February 2, 2025, the following provisions took effect:
- Prohibited AI Systems: The AI Act’s prohibited risk category bans the use of AI systems deemed to pose “unacceptable risks.” Prohibited AI systems include tools that perform social scoring; manipulate or exploit individuals; infer emotions in workplace or educational settings; conduct real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions); or engage in untargeted scraping of the internet or CCTV footage for facial images to build or expand facial-recognition databases.
- AI Literacy Rules: The AI Act’s literacy rules require all providers and deployers of AI systems (even those classified as limited or minimal risk) to ensure that their personnel possess a sufficient understanding of AI, including its opportunities and risks, to use AI systems effectively and responsibly. Companies must therefore develop and implement appropriate AI governance policies and training programs for their personnel.
Guidance: Draft General-Purpose AI Code of Practice
The European Commission has issued a Second Draft General-Purpose AI Code of Practice for developers of GPAI models. The draft Code, developed with industry stakeholders, aims to clarify compliance requirements and support the AI Act’s consistent and effective application across the EU. It is expected to be finalized by May 2025 and will serve as a guideline for developers in adhering to the AI Act’s provisions.
Notably, on January 17, 2025, the Commission unveiled a template for summarizing the training data used in GPAI models. This template is a key component of the forthcoming GPAI Code of Practice.
Risks of Non-Compliance / Enforcement
The AI Act’s prohibitions and obligations apply to companies offering or using AI systems. Violators face significant penalties depending on the nature of the non-compliance, including fines of up to €35 million or 7% of their global annual turnover, whichever is higher.
For providers of GPAI models, the Commission may impose a fine of up to €15 million or 3% of worldwide annual turnover, whichever is higher. The AI Office, based in Brussels, will enforce the obligations for providers of GPAI models and will support EU Member State national authorities in enforcing the AI Act’s requirements.
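To make the penalty caps concrete, the fine ceiling is the fixed amount or the turnover-based amount, whichever is higher. A minimal sketch (function name and example figures are illustrative, not from the Act itself):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_pct: float) -> float:
    """Upper bound of an AI Act fine: the fixed cap or the
    turnover-based cap, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * worldwide_annual_turnover_eur)

# Prohibited-practice violations: up to EUR 35M or 7% of turnover.
# For a hypothetical company with EUR 1bn turnover, the turnover-based
# cap (EUR 70M) exceeds the fixed cap (EUR 35M), so it controls.
print(max_fine_eur(1_000_000_000, 35_000_000, 0.07))  # 70000000.0
```

For a smaller company whose 7% figure falls below €35 million, the fixed cap of €35 million remains the ceiling.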
Next Compliance Deadlines
The next major compliance deadline is August 2, 2025. By that date, EU Member States must designate the national authorities responsible for enforcing the AI Act. On the same date, the Act’s rules on penalties, governance, and confidentiality take effect, and the obligations for providers of GPAI models begin to apply.
By August 2, 2026, most remaining AI Act obligations become effective, including the rules for high-risk AI systems used in critical infrastructure, employment and worker management, and access to essential services, as well as specific transparency requirements for certain AI systems.
By August 2, 2027, providers of GPAI models placed on the market before August 2, 2025, must comply with the AI Act.
Immediate Steps to Take
Companies must assess whether and how the AI Act applies to their AI systems or GPAI models by:
- Identifying and documenting all AI systems or GPAI models that a company develops or deploys, along with their intended use cases;
- Classifying all AI systems or GPAI models according to their respective risk categories and compliance requirements;
- Conducting a compliance gap and risk analysis to identify and address any compliance issues or challenges;
- Developing and implementing an AI strategy and governance program, including an AI literacy training program for personnel.
Three Key Takeaways
- Following the February 2, 2025 compliance deadline on prohibited AI systems and AI literacy rules, companies must act now to assess whether and how the AI Act applies to their AI systems or GPAI models.
- With fast-evolving technology and regulatory frameworks, companies should conduct regular audits to review and update internal governance, risk, and compliance programs for AI systems.
- Failure to comply with the AI Act can lead to significant penalties, including fines of up to €35 million or 7% of a company’s global annual turnover.