EU AI Act Compliance: Key Considerations for Businesses Before August 2025

The European Commission has made it clear: the timetable for implementing the Artificial Intelligence Act (EU AI Act) remains unchanged, with no transition periods or postponements planned. The first provisions have been in force since February 2, 2025, and further key obligations become binding on August 2, 2025. Violations of the AI Act may draw significant penalties, including fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.

The AI Act marks the world’s first comprehensive legal framework for using and developing AI. It follows a risk-based approach that links regulatory requirements to the specific risk an AI system entails. Implementation may pose structural, technical, and governance-related challenges for companies, particularly in the area of general-purpose AI (GPAI).

Key Requirements and Compliance Obligations Under the EU AI Act

The AI Act focuses on high-risk and prohibited AI practices. Certain applications have been expressly prohibited since February 2, 2025. These include, among others:

  • Biometric categorization based on sensitive characteristics;
  • Emotion recognition systems in the workplace;
  • Manipulative systems that covertly influence human behavior;
  • Social scoring.

These prohibitions apply comprehensively—both to the development and to the mere use of such systems.

On August 2, 2025, comprehensive due diligence, transparency, and documentation requirements will also take effect for various actors along the AI value chain.

The German legislature is expected to entrust the Federal Network Agency (Bundesnetzagentur) with regulatory oversight. The agency has already set up a central point of contact, the AI Service Desk, to serve as a first port of call for small and medium-sized enterprises, particularly for questions on the AI Act’s practical implementation. Companies should closely monitor regulatory developments, for example regarding the final Code of Practice for GPAI models, published on July 10, 2025, and the harmonization of technical standards, which may become the “best practice” benchmark for compliance.

Which Companies and Stakeholders Are Impacted by the EU AI Act?

General-Purpose AI (GPAI) Providers

Providers of GPAI models—such as large language or multimodal models—will be subject to a specific regulatory regime beginning August 2025. They will be required to maintain technical documentation that makes the model’s development, training, and evaluation traceable. In addition, transparency reports must be prepared that describe the capabilities, limitations, potential risks, and guidance for integrators.

A summary of the training data used must also be published. This must include data types, sources, and preprocessing methods. The use of copyright-protected content must be documented and legally permissible. At the same time, providers must ensure the protection of confidential information.
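To keep such a summary consistent and auditable, it can help to maintain it as a structured internal record before publication. The sketch below is illustrative only: the field names are our own, not prescribed by the Act, and the Commission is to provide an official template for the public summary.

```python
from dataclasses import dataclass

# Illustrative internal record for the public training-data summary.
# Field names are our own sketch; the official Commission template governs.
@dataclass
class TrainingDataSummary:
    model_name: str
    data_types: list[str]            # e.g. text, images, code
    sources: list[str]               # e.g. licensed corpora, public web crawls
    preprocessing: list[str]         # e.g. deduplication, filtering
    copyright_policy: str            # how copyright compliance is ensured

summary = TrainingDataSummary(
    model_name="example-gpai-model",
    data_types=["text", "code"],
    sources=["licensed publisher corpus", "public web crawl (opt-outs honored)"],
    preprocessing=["deduplication", "PII filtering"],
    copyright_policy="TDM opt-outs under the DSM Directive are respected",
)
```

A record like this can be versioned alongside the model so the published summary, the technical documentation, and the copyright review stay in sync.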

GPAIs with Systemic Risk

Extended obligations apply to particularly powerful GPAI models that are classified as “systemic.” Classification is based on technical criteria such as computing power, reach, or potential impact. Providers of such models must report the system to the European Commission, undergo structured evaluation and testing procedures, and permanently document security incidents. In addition, increased requirements apply in the areas of cybersecurity and monitoring.
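The central compute criterion is quantified in the Act itself: under Article 51, a GPAI model is presumed to pose systemic risk when its cumulative training compute exceeds 10^25 floating-point operations. A minimal sketch of that presumption check (the function name is illustrative, not part of the Act):

```python
# Art. 51 AI Act: cumulative training compute above 1e25 FLOPs
# triggers the presumption of systemic risk.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if the model meets the systemic-risk presumption."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(5e25))  # True
print(presumed_systemic_risk(1e24))  # False
```

Note that the presumption is rebuttable, and the Commission may also designate models as systemic on other grounds, so the threshold is a starting point rather than a complete test.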

Downstream Providers and Modifiers

Companies that substantially modify existing GPAI models become providers in their own right for regulatory purposes. A modification is substantial if retraining, fine-tuning, or other technical adjustments significantly change the model’s functionality, performance, or risks, and the change goes beyond mere integration or use. All obligations that apply to original GPAI developers then also apply to the modified model. In practice, fine-tuning in business applications must therefore be carefully reviewed from a legal perspective and, where necessary, backed by compliance measures.

AI System Users

Companies that merely use AI systems—especially in applications with potentially high risks, such as in recruitment, medicine, or critical infrastructure—are also required to maintain a complete inventory of the systems they use. In addition, they must ensure that prohibited applications are not used. Additional obligations will apply to high-risk AI systems beginning August 2026, such as data protection impact assessments and internal monitoring. The more extensive transparency obligations for AI system users—such as AI-generated content labeling—will not become binding until August 2, 2026.

Technical and Organizational Requirements

The AI Act’s implementation requires not only legal but also structural measures. Companies should consider the following to enhance compliance:

  • Establishing a complete AI inventory with risk classification;
  • Clarifying the company’s role (provider, modifier, or deployer);
  • Preparing the necessary technical and transparency documentation;
  • Implementing copyright and data protection requirements;
  • Training and verifying AI competence among employees (including external staff); and
  • Adapting internal governance structures, including the appointment of responsible persons.
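The first two items, a complete inventory with risk classification and a clear role per system, lend themselves to a simple structured register. The sketch below is our own illustration: the risk tiers mirror the Act’s categories and the roles its actor definitions, but the data structure and names are not mandated anywhere.

```python
from dataclasses import dataclass
from enum import Enum

# Risk tiers and actor roles loosely mirroring the AI Act's categories.
# This register format is an illustrative sketch, not a prescribed template.
class Risk(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

class Role(Enum):
    PROVIDER = "provider"
    MODIFIER = "modifier"
    DEPLOYER = "deployer"

@dataclass
class AISystemEntry:
    name: str
    purpose: str
    risk: Risk
    role: Role
    responsible_person: str

inventory = [
    AISystemEntry("cv-screening", "recruitment pre-selection",
                  Risk.HIGH, Role.DEPLOYER, "HR compliance lead"),
    AISystemEntry("chat-assistant", "customer support",
                  Risk.LIMITED, Role.DEPLOYER, "IT governance"),
]

# Prohibited practices must not appear in active use at all.
assert all(e.risk is not Risk.PROHIBITED for e in inventory)

# High-risk systems warrant a named responsible person and extra obligations.
high_risk = [e.name for e in inventory if e.risk is Risk.HIGH]
print(high_risk)  # ['cv-screening']
```

Even a lightweight register like this makes it straightforward to answer the regulator’s first questions: which systems are in use, in which role, at which risk level, and who is accountable.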

The Commission and national supervisory authorities have announced that they will closely monitor implementation. Companies should regularly review and adapt their compliance strategies, particularly with regard to the Codes of Practice and future technical standards.

Early Preparation for EU AI Act Compliance and Risk Mitigation

August 2, 2025, is a binding deadline. Taking stock, clarifying roles, and evaluating systems may help create a solid foundation for regulatory certainty. GPAI providers and modifiers in particular should prepare for a higher level of accountability. But traditional deployers are also required to ensure transparency and control of their AI applications.

Early action may mitigate legal and financial risks, as well as underscore responsibility and future viability in dealing with artificial intelligence.
