EU AI Act: Implications of New Compliance Rules

The European Union’s Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework on AI, entered into force on August 1, 2024. The AI Act sets out staggered compliance deadlines for various areas it regulates.

The Development

On February 2, 2025, the AI Act reached its first compliance deadline. As of that date, the Act’s prohibited risk category applies, banning the use of AI systems deemed to pose “unacceptable risks.” The AI Act’s literacy rules became applicable on the same day.

Looking Ahead

Further compliance deadlines follow in the coming years, and the European Commission continues to issue guidance on complying with the AI Act. The Commission has also released the Second Draft of the General-Purpose AI Code of Practice to provide clarity and support consistent compliance for general-purpose AI (GPAI) models.

The goal of the EU’s AI Act is to ensure that AI systems placed on the European market and used within the EU are safe and respect fundamental rights and EU values.

First Compliance Deadline

As of February 2, 2025, the following provisions took effect:

  • Prohibited AI Systems: The AI Act’s prohibited risk category bans the use of AI systems deemed to pose “unacceptable risks.” Prohibited AI systems include tools that perform social scoring, manipulate or exploit individuals, infer emotions in workplace or educational settings, involve real-time biometric identification in publicly accessible spaces, or engage in untargeted scraping of the internet or CCTV footage for facial images to build or expand face-recognition databases.
  • AI Act Literacy Rules: The AI Act’s literacy rules require all providers and deployers of AI systems (even those classified as low-risk or no risk) to ensure that their personnel possess a sufficient understanding of AI, including its opportunities and risks, to use AI systems effectively and responsibly. Companies must therefore develop and implement appropriate AI governance policies and training programs for their personnel.

Guidance: Draft General-Purpose AI Code of Practice

The European Commission has issued a Second Draft General-Purpose AI Code of Practice for developers of GPAI models. This draft Code, developed with industry stakeholders, aims to clarify compliance requirements and support the AI Act’s consistent and effective application across the EU. The draft Code is expected to be finalized by May 2025 and will guide developers in adhering to the AI Act’s provisions.

Notably, the Commission unveiled a template for summarizing training data used in GPAI models on January 17, 2025. This template is a key component of the forthcoming GPAI Code of Practice.

Risks of Non-Compliance / Enforcement

The AI Act’s prohibitions and obligations apply to companies offering or using AI systems. Violators face significant penalties depending on the nature of the non-compliance, including fines of up to €35 million or 7% of their global annual turnover, whichever is higher.

For providers of GPAI models, the Commission may impose a fine of up to €15 million or 3% of their worldwide annual turnover, whichever is higher. The AI Office, based in Brussels, will enforce the obligations for providers of GPAI models and support EU Member State national authorities in enforcing the AI Act’s requirements.
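Both fine tiers follow the same “fixed amount or percentage of turnover, whichever is higher” pattern. A minimal sketch of that arithmetic (the function name and the example turnover figures are illustrative, not legal advice):

```python
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Upper bound of an AI Act fine: the fixed cap or the
    turnover-based cap, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# Prohibited-practice tier: up to EUR 35M or 7% of worldwide annual turnover.
prohibited_cap = max_fine_eur(1_000_000_000, 35_000_000, 0.07)  # EUR 70M for EUR 1B turnover
# GPAI-provider tier: up to EUR 15M or 3% of worldwide annual turnover.
gpai_cap = max_fine_eur(100_000_000, 15_000_000, 0.03)  # fixed EUR 15M cap dominates here
```

For large undertakings the turnover-based cap will usually be the binding one, which is why the percentage figure matters most in practice.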

Next Compliance Deadlines

The next major compliance deadline is August 2, 2025, by which EU Member States must designate the national authorities responsible for enforcing the AI Act. On the same date, obligations for providers of GPAI models and rules regarding penalties, governance, and confidentiality will also take effect.

By August 2, 2026, most remaining AI Act obligations become effective, including the rules applicable to high-risk AI systems used in critical infrastructure, employment and worker management, and access to essential services, as well as specific transparency requirements for AI systems.

By August 2, 2027, providers of GPAI models placed on the market before August 2, 2025, must comply with the AI Act.
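For compliance planning, the staggered deadlines above can be kept in a simple lookup. A sketch assuming ISO dates (the milestone summaries are paraphrased from this alert, not the Act’s full text):

```python
from datetime import date

# Paraphrased AI Act milestones discussed above.
AI_ACT_DEADLINES = {
    date(2025, 2, 2): "Prohibitions on 'unacceptable risk' AI systems; AI literacy rules",
    date(2025, 8, 2): "National authorities designated; GPAI, penalty, governance rules",
    date(2026, 8, 2): "Most remaining obligations, incl. high-risk and transparency rules",
    date(2027, 8, 2): "Compliance for GPAI models placed on the market before Aug 2, 2025",
}

def obligations_in_force(as_of: date) -> list[str]:
    """Return the milestone summaries whose deadlines have passed by `as_of`."""
    return [text for deadline, text in sorted(AI_ACT_DEADLINES.items())
            if deadline <= as_of]
```

A register like this makes it straightforward to answer, at any point, which obligations already bind a given system.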

Immediate Steps to Take

Companies must assess whether and how the AI Act applies to their AI systems or GPAI models by:

  • Identifying and documenting all AI systems or GPAI models that a company develops or deploys, along with their intended use cases;
  • Classifying all AI systems or GPAI models according to their respective risk categories and compliance requirements;
  • Conducting a compliance gap and risk analysis to identify and address any compliance issues or challenges;
  • Developing and implementing an AI strategy and governance program, including an AI literacy training program for personnel.
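As one way to operationalize the inventory and classification steps above, a company’s AI register might be modeled as follows (a minimal sketch: the risk tiers mirror the Act’s categories, but the field names, helper, and example entry are illustrative assumptions):

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"   # banned as of Feb 2, 2025
    HIGH = "high risk"
    LIMITED = "limited risk"           # transparency obligations
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    intended_use: str
    role: str                  # "provider" or "deployer"
    risk_tier: RiskTier
    compliance_gaps: list[str] = field(default_factory=list)

    def needs_action(self) -> bool:
        """Flag records that require remediation before further use."""
        return self.risk_tier is RiskTier.PROHIBITED or bool(self.compliance_gaps)

# Illustrative entry from a gap analysis.
record = AISystemRecord(
    name="CV screening model",
    intended_use="rank job applicants",
    role="deployer",
    risk_tier=RiskTier.HIGH,
    compliance_gaps=["no AI literacy training program documented"],
)
```

Keeping the intended use alongside the risk tier matters because classification under the AI Act turns on how a system is used, not only on the underlying technology.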

Three Key Takeaways

  1. Following the February 2, 2025 compliance deadline on prohibited AI systems and AI literacy rules, companies must act now to assess whether and how the AI Act applies to their AI systems or GPAI models.
  2. With fast-evolving technology and regulatory frameworks, companies should conduct regular audits to review and update internal governance, risk, and compliance programs for AI systems.
  3. Failure to comply with the AI Act can lead to significant penalties, including fines of up to €35 million or 7% of a company’s global annual turnover.
