EU AI Act: Implications of New Compliance Rules

EU AI Act: First Rules Take Effect on Prohibited AI Systems and AI Literacy

The European Union’s Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework on AI, entered into force on August 1, 2024. The AI Act sets out staggered compliance deadlines for various areas it regulates.

The Development

As of February 2, 2025, the AI Act’s first compliance deadline has passed. From that date, the Act’s prohibited-risk category applies, banning AI systems deemed to pose “unacceptable risks.” The Act’s AI literacy rules became applicable on the same day.

Looking Ahead

Further compliance deadlines lie ahead in the coming years, and the European Commission continues to issue guidance on complying with the AI Act. The Commission has also released the Second Draft of the General-Purpose AI Code of Practice to provide clarity and support consistent compliance for general-purpose AI (GPAI) models.

The goal of the EU’s AI Act is to ensure that AI systems placed on the European market and used within the EU are safe and respect fundamental rights and EU values.

First Compliance Deadline

As of February 2, 2025, the following provisions took effect:

  • Prohibited AI Systems: The AI Act’s prohibited risk category bans the use of AI systems deemed to pose “unacceptable risks.” Prohibited AI systems include tools that perform social scoring, manipulate or exploit individuals, infer emotions in workplace or educational settings, perform real-time remote biometric identification in publicly accessible spaces, or engage in untargeted scraping of the internet or CCTV footage for facial images to build or expand facial-recognition databases.
  • AI Act Literacy Rules: The AI Act’s literacy rules require all providers and deployers of AI systems (even those classified as low-risk or no risk) to ensure that their personnel possess a sufficient understanding of AI, including its opportunities and risks, to use AI systems effectively and responsibly. Companies must therefore develop and implement appropriate AI governance policies and training programs for their personnel.

Guidance: Draft General-Purpose AI Code of Practice

The European Commission has issued a Second Draft General-Purpose AI Code of Practice for developers of GPAI models. This draft Code, developed with industry stakeholders, aims to clarify compliance requirements and support the AI Act’s consistent and effective application across the EU. The draft Code is expected to be finalized by May 2025 and will serve as a guideline for developers seeking to adhere to the AI Act’s provisions.

Notably, the Commission unveiled a template for summarizing training data used in GPAI models on January 17, 2025. This template is a key component of the forthcoming GPAI Code of Practice.

Risks of Non-Compliance / Enforcement

The AI Act’s prohibitions and obligations apply to companies offering or using AI systems. Violators face significant penalties depending on the nature of the non-compliance, including fines of up to €35 million or 7% of their global annual turnover, whichever is higher.

For providers of GPAI models, the Commission may impose a fine of up to €15 million or 3% of worldwide annual turnover, whichever is higher. The AI Office, based in Brussels, will enforce the obligations for providers of GPAI models and support EU Member State national authorities in enforcing the AI Act’s requirements.
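The two-tier fine structure can be illustrated with a short calculation: the applicable maximum is the higher of a fixed cap and a percentage of worldwide annual turnover. The euro amounts and percentages below come from the AI Act; the function names and example turnover figures are illustrative assumptions.

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Return the maximum possible fine: the higher of the fixed cap
    and the given percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

def max_fine_prohibited_practices(turnover_eur: float) -> float:
    # Violations of the prohibited-AI rules: up to EUR 35M or 7% of turnover.
    return max_fine(turnover_eur, 35_000_000, 0.07)

def max_fine_gpai_provider(turnover_eur: float) -> float:
    # GPAI-provider violations: up to EUR 15M or 3% of turnover.
    return max_fine(turnover_eur, 15_000_000, 0.03)

# Example: a company with EUR 1 billion worldwide annual turnover.
print(max_fine_prohibited_practices(1_000_000_000))  # 70000000.0 (7% exceeds the EUR 35M floor)
print(max_fine_gpai_provider(1_000_000_000))         # 30000000.0 (3% exceeds the EUR 15M floor)
```

For smaller companies the fixed cap dominates: at €100 million turnover, 7% is only €7 million, so the €35 million figure sets the maximum.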

Next Compliance Deadlines

The next major compliance deadline is August 2, 2025. By that date, EU Member States must designate national authorities responsible for the AI Act’s enforcement. On this date, rules regarding penalties, governance, and confidentiality will also take effect.

By August 2, 2026, most other AI Act obligations will become effective, including rules applicable to high-risk AI systems used in critical infrastructure, employment and workers’ management, and access to essential services. Specific transparency requirements for AI systems will also become effective on this date.

By August 2, 2027, providers of GPAI models placed on the market before August 2, 2025, must comply with the AI Act.

Immediate Steps to Take

Companies must assess whether and how the AI Act applies to their AI systems or GPAI models by:

  • Identifying and documenting all AI systems or GPAI models that a company develops or deploys, along with their intended use cases;
  • Classifying all AI systems or GPAI models according to their respective risk categories and compliance requirements;
  • Conducting a compliance gap and risk analysis to identify and address any compliance issues or challenges;
  • Developing and implementing an AI strategy and governance program, including an AI literacy training program for personnel.
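The inventory-and-classification steps above can be sketched as a simple record-keeping structure. The risk categories mirror the AI Act’s tiers, but all field names, the gap-check logic, and the example entries are illustrative assumptions, not anything prescribed by the Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"  # banned as of February 2, 2025
    HIGH = "high"              # most obligations apply from August 2, 2026
    LIMITED = "limited"        # subject to transparency requirements
    MINIMAL = "minimal"        # AI literacy rules still apply

@dataclass
class AISystemRecord:
    name: str
    intended_use: str
    risk: RiskCategory
    gap_analysis_done: bool = False
    literacy_training_done: bool = False

def open_compliance_issues(inventory: list[AISystemRecord]) -> list[str]:
    """Flag inventory entries that still need attention."""
    issues = []
    for rec in inventory:
        if rec.risk is RiskCategory.PROHIBITED:
            issues.append(f"{rec.name}: prohibited practice, must be discontinued")
        if not rec.gap_analysis_done:
            issues.append(f"{rec.name}: compliance gap analysis outstanding")
        if not rec.literacy_training_done:
            issues.append(f"{rec.name}: AI literacy training outstanding")
    return issues

inventory = [
    AISystemRecord("resume-screener", "rank job applicants", RiskCategory.HIGH),
    AISystemRecord("support-chatbot", "customer service", RiskCategory.LIMITED,
                   gap_analysis_done=True, literacy_training_done=True),
]
for issue in open_compliance_issues(inventory):
    print(issue)
```

Even a minimal register like this makes the gap analysis concrete: each record ties a system to its risk tier and shows at a glance which of the steps above remain open.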

Three Key Takeaways

  1. Following the February 2, 2025 compliance deadline on prohibited AI systems and AI literacy rules, companies must act now to assess whether and how the AI Act applies to their AI systems or GPAI models.
  2. With fast-evolving technology and regulatory frameworks, companies should conduct regular audits to review and update internal governance, risk, and compliance programs for AI systems.
  3. Failure to comply with the AI Act can lead to significant penalties, including fines of up to €35 million or 7% of a company’s global annual turnover.
