EU AI Act: Implications of New Compliance Rules

EU AI Act: First Rules Take Effect on Prohibited AI Systems and AI Literacy

The European Union’s Artificial Intelligence Act (AI Act), the world’s first comprehensive legal framework on AI, entered into force on August 1, 2024. The AI Act sets out staggered compliance deadlines for various areas it regulates.

The Development

As of February 2, 2025, the AI Act’s first compliance deadline has been reached. From that date, the AI Act’s ban on AI systems deemed to pose “unacceptable risks” applies. The AI Act’s AI literacy rules became applicable on the same day.

Looking Ahead

Further compliance deadlines lie ahead in the coming years, and the European Commission continues to issue guidelines on compliance with the AI Act. The Commission has also released the Second Draft of the General-Purpose AI Code of Practice to provide clarity and support consistent compliance for general-purpose AI (GPAI) models.

The goal of the EU’s AI Act is to ensure that AI systems placed on the European market and used within the EU are safe and respect fundamental rights and EU values.

First Compliance Deadline

As of February 2, 2025, the following provisions took effect:

  • Prohibited AI Systems: The AI Act’s prohibited risk category bans the use of AI systems deemed to pose “unacceptable risks.” Prohibited AI systems include tools that perform social scoring, manipulate or exploit individuals, infer emotions in workplace or educational settings, conduct real-time remote biometric identification in publicly accessible spaces, or engage in untargeted scraping of facial images from the internet or CCTV footage to build or expand facial-recognition databases.
  • AI Act Literacy Rules: The AI Act’s literacy rules require all providers and deployers of AI systems (even those classified as low-risk or no risk) to ensure that their personnel possess a sufficient understanding of AI, including its opportunities and risks, to use AI systems effectively and responsibly. Companies must therefore develop and implement appropriate AI governance policies and training programs for their personnel.

Guidance: Draft General-Purpose AI Code of Practice

The European Commission has issued a Second Draft General-Purpose AI Code of Practice for developers of GPAI models. This draft Code, developed with industry stakeholders, aims to clarify compliance requirements for the AI Act’s consistent and effective application across the EU. The draft Code is expected to be finalized by May 2025 and will serve as a guideline for developers to adhere to the AI Act’s provisions.

Notably, the Commission unveiled a template for summarizing training data used in GPAI models on January 17, 2025. This template is a key component of the forthcoming GPAI Code of Practice.

Risks of Non-Compliance / Enforcement

The AI Act’s prohibitions and obligations apply to companies offering or using AI systems. Violators face significant penalties depending on the nature of the non-compliance, including fines of up to €35 million or 7% of their global annual turnover, whichever is higher.

For providers of GPAI models, the Commission may impose a fine of up to €15 million or 3% of worldwide annual turnover, whichever is higher. The AI Office, based in Brussels, will enforce the obligations for providers of GPAI models and support EU Member States’ national authorities in enforcing the AI Act’s requirements.

Next Compliance Deadlines

The next major compliance deadline is August 2, 2025. By that date, EU Member States must designate the national authorities responsible for enforcing the AI Act. Rules on penalties, governance, and confidentiality also take effect on that date, as do the obligations for providers of GPAI models placed on the market from that date onward.

By August 2, 2026, most other AI Act obligations will become effective, including rules applicable to high-risk AI systems used in critical infrastructure, employment and workers’ management, and access to essential services. Specific transparency requirements for AI systems will also become effective on this date.

By August 2, 2027, providers of GPAI models placed on the market before August 2, 2025, must comply with the AI Act.

Immediate Steps to Take

Companies must assess whether and how the AI Act applies to their AI systems or GPAI models, and prepare for compliance, by:

  • Identifying and documenting all AI systems or GPAI models that a company develops or deploys, along with their intended use cases;
  • Classifying all AI systems or GPAI models according to their respective risk categories and compliance requirements;
  • Conducting a compliance gap and risk analysis to identify and address any compliance issues or challenges;
  • Developing and implementing an AI strategy and governance program, including an AI literacy training program for personnel.

Three Key Takeaways

  1. Following the February 2, 2025 compliance deadline on prohibited AI systems and AI literacy rules, companies must act now to assess whether and how the AI Act applies to their AI systems or GPAI models.
  2. With fast-evolving technology and regulatory frameworks, companies should conduct regular audits to review and update internal governance, risk, and compliance programs for AI systems.
  3. Failure to comply with the AI Act can lead to significant penalties, including fines of up to €35 million or 7% of a company’s global annual turnover.
