The EU’s AI Act: A Comprehensive Overview
The European Union’s AI Act establishes a regulatory framework intended to balance AI innovation with necessary safety measures. The initiative launched alongside the AI Act Explorer on July 18, 2025, with the goal of helping companies navigate compliance with the new rules.
Purpose and Objectives
The AI Act introduces safeguards for advanced artificial intelligence models while fostering a competitive environment for AI enterprises. It sorts AI systems into four risk tiers: unacceptable, high, limited, and minimal.
According to Henna Virkkunen, European Commission Executive Vice-President for Tech Sovereignty, Security and Democracy, the guidelines aim to support the smooth application of the AI Act.
Risk Classifications
Under EU law, AI models are categorized into four tiers based on the risk they pose (a minimal code sketch follows the list):
- Unacceptable Risk: Prohibited outright within the EU. Examples include social scoring and certain facial-recognition systems.
- High Risk: Permitted, but subject to stringent compliance measures and evaluations.
- Limited Risk: Subject to specific obligations, chiefly around transparency, with less strict requirements.
- Minimal Risk: Faces the least regulatory scrutiny.
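To make the taxonomy concrete, here is a minimal sketch of the four tiers as a Python enumeration. The tier names come from the Act; the example use cases in the mapping are hypothetical illustrations, not legal classifications.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least regulated."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright in the EU
    HIGH = "high"                  # stringent compliance and evaluations
    LIMITED = "limited"            # lighter, specific obligations
    MINIMAL = "minimal"            # least regulatory scrutiny

# Hypothetical mapping of example use cases to tiers, for illustration only;
# real classification turns on the Act's annexes and legal analysis.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis_assistant": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.value} risk")
```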
For instance, general-purpose models trained using more than 10²⁵ floating-point operations (FLOPs) are presumed to present systemic risk. Notable models such as OpenAI’s GPT-4 and Google’s Gemini 2.5 Pro fall within this classification.
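Because the threshold is defined in terms of cumulative training compute, a rough self-check is simple arithmetic. The sketch below uses the common 6 × parameters × tokens approximation for training FLOPs, which is a heuristic from the scaling-laws literature, not a calculation method prescribed by the Act; the model figures are invented.

```python
# Rough self-check against the Act's 10^25 FLOP systemic-risk threshold.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute, per the Act

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate training compute as ~6 * N * D (a scaling-laws heuristic,
    not a method prescribed by the AI Act)."""
    return 6 * n_params * n_tokens

# Hypothetical model: 1 trillion parameters trained on 10 trillion tokens.
flops = estimate_training_flops(n_params=1e12, n_tokens=1e13)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # 6.00e+25
print("Presumed systemic risk:", flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS)
```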
Compliance Obligations
Providers of AI models classified as posing systemic risk must meet specific obligations:
- Conduct comprehensive evaluations to identify potential systemic risks.
- Document adversarial testing performed during risk mitigation.
- Report serious incidents to both EU and national authorities.
- Implement cybersecurity measures to protect against misuse of AI systems.
These requirements place a significant responsibility on AI companies to proactively identify and mitigate risks from the outset.
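As a minimal sketch, the four obligations above could be tracked as a simple checklist; the field names are illustrative shorthand, not terminology from the Act.

```python
from dataclasses import dataclass

@dataclass
class SystemicRiskObligations:
    """Hypothetical checklist mirroring the four obligations listed above.
    Field names are illustrative shorthand, not terms defined by the Act."""
    evaluation_completed: bool = False        # systemic-risk evaluations
    adversarial_testing_documented: bool = False
    serious_incidents_reported: bool = False  # to EU and national authorities
    cybersecurity_controls_in_place: bool = False

    def outstanding(self) -> list[str]:
        """Return the obligations not yet satisfied."""
        return [name for name, done in vars(self).items() if not done]

tracker = SystemicRiskObligations(evaluation_completed=True)
print(tracker.outstanding())
```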
Financial Penalties for Non-Compliance
The AI Act backs these requirements with substantial financial penalties. Fines range from €7.5 million (approximately $8.7 million) or 1.5% of a company’s global annual turnover up to a maximum of €35 million or 7% of global turnover, whichever is higher, with the applicable tier determined by the severity of the violation.
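Because each cap is the higher of a fixed amount and a share of turnover, the exposure for a large company is easy to estimate. The sketch below uses the figures cited above; the turnover is hypothetical.

```python
def max_fine_eur(global_turnover_eur: float, severe: bool) -> float:
    """Upper bound on a fine: a fixed amount or a share of global turnover,
    whichever is higher, using the tiers cited above."""
    if severe:
        return max(35_000_000, 0.07 * global_turnover_eur)
    return max(7_500_000, 0.015 * global_turnover_eur)

# Hypothetical company with €10 billion in global annual turnover:
print(f"Severe violation cap: €{max_fine_eur(10e9, severe=True):,.0f}")   # €700,000,000
print(f"Lesser violation cap: €{max_fine_eur(10e9, severe=False):,.0f}")  # €150,000,000
```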
Criticism and Support
Critics argue that the Act’s regulations are inconsistent and may stifle innovation. On July 18, for instance, Meta’s chief global affairs officer Joel Kaplan announced that the company would not sign the EU’s Code of Practice, a voluntary framework aligned with the AI Act, citing legal uncertainties for developers.
Proponents counter that the Act will prevent companies from prioritizing profit at the expense of consumer privacy and safety. Companies such as Mistral and OpenAI have committed to the Code of Practice, treating the voluntary mechanism as a way to demonstrate compliance with the Act’s binding obligations.
Conclusion
The introduction of the AI Act marks a pivotal moment in the governance of artificial intelligence in the EU, aiming to protect consumers while promoting responsible innovation. As compliance deadlines approach, companies must adapt to the new rules and ensure their AI models meet the outlined safety standards.