EU AI Act Enforces Initial Compliance Requirements

First Requirements of the AI Act Come Into Effect

On February 2, 2025, the first requirements under the European Union AI Act officially came into effect. From that date, the Act bans AI systems that engage in prohibited AI practices and requires providers and deployers of AI systems to ensure that the people operating these technologies have a sufficient level of AI literacy.

Pending Guidance from the European Commission

Despite the urgency surrounding the Act's implementation, guidance from the European Commission is still pending. Many companies have therefore designed and implemented their own compliance strategies in the absence of formal direction.

Engagement at Knowledge-Sharing Forums

In light of the new regulations, law firms are increasingly engaging in knowledge-sharing forums. Clifford Chance, for instance, is preparing for the global AI Action Summit in Paris. At the AI Fringe, a satellite event on February 11, Dessislava Savova, a partner at Clifford Chance, will moderate a discussion on delivering trustworthy AI in challenging times. The panel will feature experts including Brendan Kelleher of SoftBank and Laurent Daudet of LightOn.

Timeline of the EU AI Act

The EU AI Act entered into force on August 1, 2024, with its requirements taking effect on a staggered timeline. Most provisions will apply from August 2, 2026, and cover any AI systems used within the EU.

Literacy Requirements

The Act stipulates that providers and deployers of AI systems must take appropriate measures to ensure that their staff have sufficient skills, knowledge, and understanding of the AI systems they operate, including awareness of the risks and potential harm that AI can cause.

Prohibited AI Practices

Prohibited AI practices are those deemed harmful and abusive because they contravene Union values, the rule of law, and fundamental rights. The Act bans systems that:

  • Are manipulative or deceptive
  • Exploit vulnerabilities, such as age or socio-economic status
  • Score individuals based on behavior or personality traits
  • Profile individuals to predict criminal behavior
  • Create or expand facial recognition databases through untargeted scraping of the internet and CCTV footage
  • Infer emotions in workplaces or educational settings, or use biometric categorization to infer sensitive characteristics such as race or political beliefs

Further Reading

For more insight into the bans that have come into effect, a paper from Mayer Brown provides additional context and analysis of the EU AI Act and its implications.
