EU AI Act Compliance: Key Considerations for Businesses Before August 2025

The European Commission has made it clear: the timetable for implementing the Artificial Intelligence Act (EU AI Act) remains unchanged. There are no plans for transition periods or postponements. The first regulations have been in force since February 2, 2025, while further key obligations will become binding on August 2, 2025. Violations of the AI Act may be punished with significant penalties, including fines of up to EUR 35 million or 7% of global annual turnover.

The AI Act marks the world’s first comprehensive legal framework for using and developing AI. It follows a risk-based approach that links regulatory requirements to the specific risk an AI system entails. Implementation may pose structural, technical, and governance-related challenges for companies, particularly in the area of general-purpose AI (GPAI).

Key Requirements and Compliance Obligations Under the EU AI Act

The AI Act concentrates its strictest rules on high-risk and prohibited AI practices. Certain applications have been expressly prohibited since February 2, 2025. These include, among others:

  • Biometric categorization based on sensitive characteristics;
  • Emotion recognition systems in the workplace;
  • Manipulative systems that influence human behavior without being noticed;
  • Social scoring.

These prohibitions apply comprehensively—both to the development and to the mere use of such systems.

On August 2, 2025, comprehensive due diligence, transparency, and documentation requirements will also take effect for various actors along the AI value chain.

The German legislature is expected to entrust the Federal Network Agency (Bundesnetzagentur) with regulatory oversight. The agency has already set up a central point of contact, the AI Service Desk, to serve as a first point of contact for small and medium-sized enterprises, particularly for questions relating to the AI Act’s practical implementation. Companies should closely monitor regulatory developments, for example regarding the final Code of Practice for GPAI models, which was published on July 10, 2025, and the harmonization of technical standards, which may become the “best practice” benchmark for compliance.

Which Companies and Stakeholders Are Impacted by the EU AI Act?

General-Purpose AI (GPAI) Providers

Providers of GPAI models—such as large language or multimodal models—will be subject to a specific regulatory regime beginning August 2025. They will be required to maintain technical documentation that makes the model’s development, training, and evaluation traceable. In addition, transparency reports must be prepared that describe the capabilities, limitations, potential risks, and guidance for integrators.

A summary of the training data used must also be published. This must include data types, sources, and preprocessing methods. The use of copyright-protected content must be documented and legally permissible. At the same time, providers must ensure the protection of confidential information.

GPAIs with Systemic Risk

Extended obligations apply to particularly powerful GPAI models that are classified as posing “systemic risk.” Classification is based on technical criteria such as computing power, reach, or potential impact. Providers of such models must report the system to the European Commission, undergo structured evaluation and testing procedures, and document security incidents on an ongoing basis. In addition, increased requirements apply in the areas of cybersecurity and monitoring.

Downstream Providers and Modifiers

Companies that substantially modify existing GPAI models will themselves become providers for regulatory purposes. A modification is considered substantial if the existing GPAI model is changed through retraining, fine-tuning, or other technical adjustments in such a way that the functionality, performance, or risks of the model change significantly, and the modification does not merely amount to integration or use. This means that all obligations that apply to original GPAI developers also apply to modified models. In practice, fine-tuning in the context of business applications must therefore be carefully reviewed from a legal perspective and, if necessary, secured by regulatory measures.

AI System Users

Companies that merely use AI systems—especially in potentially high-risk applications such as recruitment, medicine, or critical infrastructure—must maintain a complete inventory of the systems they use and ensure that no prohibited applications are deployed. Additional obligations for high-risk AI systems, such as data protection impact assessments and internal monitoring, will apply beginning August 2, 2026, as will the more extensive transparency obligations for AI system users, such as the labeling of AI-generated content.

Technical and Organizational Requirements

The AI Act’s implementation requires not only legal but also structural measures. Companies should consider the following to enhance compliance:

  • Establishing a complete AI inventory with risk classification;
  • Clarifying the company’s role (provider, modifier, or deployer);
  • Preparing the necessary technical and transparency documentation;
  • Implementing copyright and data protection requirements;
  • Training and verifying AI competence among employees (including external staff); and
  • Adapting internal governance structures, including the appointment of responsible persons.
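The first two checklist items—an AI inventory with risk classification and a clear statement of the company’s role for each system—can be kept as a simple internal register. The sketch below illustrates one possible structure; the risk tiers loosely mirror the Act’s risk-based approach, but the field names, categories, and flagging logic are illustrative assumptions, not terms or thresholds defined by the AI Act itself:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    # Company's role for a given system (terminology from the checklist above)
    PROVIDER = "provider"
    MODIFIER = "modifier"
    DEPLOYER = "deployer"

class RiskTier(Enum):
    # Illustrative tiers loosely mirroring the Act's risk-based approach
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    role: Role
    risk: RiskTier
    documentation_ready: bool = False  # technical/transparency docs prepared?

def compliance_gaps(inventory):
    """Flag records needing immediate attention: prohibited uses must stop,
    and high-risk systems need documentation in place (illustrative rule)."""
    gaps = []
    for rec in inventory:
        if rec.risk is RiskTier.PROHIBITED:
            gaps.append((rec.name, "prohibited practice - discontinue"))
        elif rec.risk is RiskTier.HIGH and not rec.documentation_ready:
            gaps.append((rec.name, "high-risk - documentation missing"))
    return gaps

# Hypothetical entries for demonstration only
inventory = [
    AISystemRecord("cv-screening", "ExampleVendor", Role.DEPLOYER, RiskTier.HIGH),
    AISystemRecord("chat-assistant", "ExampleVendor", Role.DEPLOYER, RiskTier.LIMITED),
]
print(compliance_gaps(inventory))
```

Even a minimal register like this makes the later steps—preparing documentation and assigning responsible persons—traceable per system rather than ad hoc.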

The Commission and national supervisory authorities have announced that they will closely monitor implementation. Companies should regularly review and adapt their compliance strategies, particularly with regard to the Codes of Practice and future technical standards.

Early Preparation for EU AI Act Compliance and Risk Mitigation

August 2, 2025, is a binding deadline. Taking stock, clarifying roles, and evaluating systems may help create a solid foundation for regulatory certainty. GPAI providers and modifiers in particular should prepare for a higher level of accountability. But traditional deployers are also required to ensure transparency and control of their AI applications.

Early action may mitigate legal and financial risks, as well as underscore responsibility and future viability in dealing with artificial intelligence.
