EU Sets AI Compliance Rules for High-Risk and Foundation Models
On July 22, 2025, the European Commission introduced guidelines to help companies comply with the EU's Artificial Intelligence Act. The law's obligations for general-purpose AI models, including those that pose systemic risks, take effect on August 2, 2025.
Key Compliance Requirements
Under the AI Act, companies must:
- Evaluate their models to identify and assess potential risks.
- Test for threats that could arise from deploying AI systems.
- Report serious incidents promptly to relevant authorities.
- Implement robust cybersecurity protections to safeguard against breaches.
The guidelines specifically target firms developing powerful AI systems with the potential to affect public health, safety, and fundamental rights. Notable companies affected include Google, OpenAI, Meta, Anthropic, and Mistral.
Transparency and Documentation
In addition to these requirements, providers of general-purpose (foundation) models must adhere to strict transparency rules. These include:
- Preparing thorough technical documentation.
- Adopting policies that comply with EU copyright law.
- Publishing summaries of the content used to train their models, to ensure accountability.
The AI Act entered into force last year; firms have until August 2, 2025, to meet all of its requirements.
Consequences of Non-Compliance
Failure to comply can trigger severe penalties, with fines of up to €35 million or 7% of a company's global annual turnover, whichever is higher, depending on the nature of the violation.
Support for Enforcement
In a recent statement, the EU's technology chief emphasized that the guidelines are designed to support smooth enforcement of the law while balancing innovation and safety in AI development.