EU AI Act: First Rules Take Effect on Prohibited AI Systems and AI Literacy

The European Union’s Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework on AI, entered into force on August 1, 2024. The AI Act sets out staggered compliance deadlines for various areas it regulates.

The Development

As of February 2, 2025, the AI Act’s first compliance deadline has passed. From that date, the Act’s prohibited-risk category applies, banning the use of AI systems deemed to pose “unacceptable risks.” The Act’s AI literacy rules became applicable on the same day.

Looking Ahead

Further compliance deadlines lie ahead in the coming years, and the European Commission continues to issue guidance on complying with the AI Act. The Commission has also released the Second Draft of the General-Purpose AI (“GPAI”) Code of Practice to provide clarity and support consistent compliance for GPAI models.

The goal of the EU’s AI Act is to ensure that AI systems placed on the European market and used within the EU are safe and respect fundamental rights and EU values.

First Compliance Deadline

As of February 2, 2025, the following provisions took effect:

  • Prohibited AI Systems: The AI Act’s prohibited-risk category bans the use of AI systems deemed to pose “unacceptable risks.” Prohibited AI systems include tools that perform social scoring, manipulate or exploit individuals, infer emotions in workplace or educational settings, perform real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions), or engage in untargeted scraping of facial images from the internet or CCTV footage to build or expand facial-recognition databases.
  • AI Literacy Rules: The AI Act’s literacy rules require all providers and deployers of AI systems (even of systems classified as low-risk or no-risk) to ensure that their personnel possess a sufficient understanding of AI, including its opportunities and risks, to use AI systems effectively and responsibly. Companies must therefore develop and implement appropriate AI governance policies and training programs for their personnel.

Guidance: Draft General-Purpose AI Code of Practice

The European Commission has issued a Second Draft General-Purpose AI Code of Practice for developers of GPAI models. The draft Code, developed with industry stakeholders, aims to clarify compliance requirements and support the AI Act’s consistent and effective application across the EU. It is expected to be finalized by May 2025 and will serve as a guideline for developers adhering to the AI Act’s GPAI provisions.

Notably, the Commission unveiled a template for summarizing training data used in GPAI models on January 17, 2025. This template is a key component of the forthcoming GPAI Code of Practice.

Risks of Non-Compliance / Enforcement

The AI Act’s prohibitions and obligations apply to companies offering or using AI systems. Violators face significant penalties depending on the nature of the non-compliance, including fines of up to €35 million or 7% of their global annual turnover, whichever is higher.

For providers of GPAI models, the Commission may impose fines of up to €15 million or 3% of worldwide annual turnover, whichever is higher. The AI Office, based in Brussels, will enforce the obligations for providers of GPAI models and support EU Member States’ national authorities in enforcing the AI Act’s requirements.
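Because the applicable ceiling is the higher of the fixed amount and the turnover percentage, the effective maximum fine scales with company size. The Python sketch below illustrates the arithmetic; the turnover figures are hypothetical examples, not guidance on any actual penalty.

```python
# Illustrative sketch of the AI Act's fine ceilings: the applicable cap is
# the HIGHER of a fixed amount and a percentage of worldwide annual turnover.
# Turnover figures below are hypothetical examples.

def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    """Return the maximum possible fine: the higher of the two caps."""
    return max(fixed_cap_eur, pct_cap * turnover_eur)

# Prohibited-practice violations: up to EUR 35 million or 7% of turnover.
print(fine_ceiling(1_000_000_000, 35_000_000, 0.07))  # 70000000.0 (7% dominates)

# GPAI-provider violations: up to EUR 15 million or 3% of turnover.
print(fine_ceiling(100_000_000, 15_000_000, 0.03))    # 15000000.0 (fixed cap dominates)
```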

Next Compliance Deadlines

The next major compliance deadline is August 2, 2025. By that date, EU Member States must designate the national authorities responsible for the AI Act’s enforcement, and rules regarding penalties, governance, and confidentiality will take effect. Obligations for providers of GPAI models placed on the market on or after that date also begin to apply.

By August 2, 2026, most other AI Act obligations will become effective, including rules applicable to high-risk AI systems used in critical infrastructure, employment and worker management, and access to essential services. Specific transparency requirements for AI systems will also become effective on that date.

By August 2, 2027, providers of GPAI models placed on the market before August 2, 2025, must comply with the AI Act.
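For internal tracking purposes, the staggered timeline can be kept in a simple compliance calendar. The Python sketch below is a hypothetical internal structure that paraphrases the deadlines described in this alert; it is not statutory text.

```python
from datetime import date

# Hypothetical internal compliance calendar for the AI Act's staggered
# deadlines, paraphrasing the milestones described in this alert.
AI_ACT_DEADLINES: dict[date, list[str]] = {
    date(2025, 2, 2): [
        "Prohibitions on 'unacceptable risk' AI systems apply",
        "AI literacy obligations for providers and deployers apply",
    ],
    date(2025, 8, 2): [
        "Member States designate national enforcement authorities",
        "Rules on penalties, governance, and confidentiality take effect",
    ],
    date(2026, 8, 2): [
        "Most remaining obligations apply, incl. high-risk AI system rules",
        "Specific transparency requirements take effect",
    ],
    date(2027, 8, 2): [
        "GPAI models placed on the market before Aug 2, 2025 must comply",
    ],
}

def upcoming_deadlines(today: date) -> list[tuple[date, list[str]]]:
    """Return deadlines on or after `today`, in chronological order."""
    return sorted(item for item in AI_ACT_DEADLINES.items() if item[0] >= today)
```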

Immediate Steps to Take

Companies must assess whether and how the AI Act applies to their AI systems or GPAI models by:

  • Identifying and documenting all AI systems or GPAI models that the company develops or deploys, along with their intended use cases (a minimal inventory sketch follows this list);
  • Classifying all AI systems or GPAI models according to their respective risk categories and compliance requirements;
  • Conducting a compliance gap and risk analysis to identify and address any compliance issues or challenges;
  • Developing and implementing an AI strategy and governance program, including an AI literacy training program for personnel.
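As a concrete starting point for the inventory and classification steps above, the Python sketch below shows what a minimal internal registry record might look like. The risk tiers mirror the AI Act’s categories as commonly summarized, but the record fields and the example entry are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    """The AI Act's risk tiers, as commonly summarized."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk (transparency obligations)"
    MINIMAL = "minimal-risk"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI-system inventory."""
    name: str
    owner: str                 # business unit or vendor responsible
    intended_use: str
    risk_category: RiskCategory
    gaps: list[str] = field(default_factory=list)  # open compliance issues

# Hypothetical example: employment-related uses are high-risk under the Act.
record = AISystemRecord(
    name="resume-screening-tool",
    owner="HR / third-party vendor",
    intended_use="Ranking job applicants",
    risk_category=RiskCategory.HIGH,
    gaps=["AI literacy training not yet rolled out"],
)
```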

Three Key Takeaways

  1. Following the February 2, 2025 compliance deadline on prohibited AI systems and AI literacy rules, companies must act now to assess whether and how the AI Act applies to their AI systems or GPAI models.
  2. With fast-evolving technology and regulatory frameworks, companies should conduct regular audits to review and update internal governance, risk, and compliance programs for AI systems.
  3. Failure to comply with the AI Act can lead to significant penalties, including fines of up to €35 million or 7% of a company’s global annual turnover, whichever is higher.
