New Obligations for General-Purpose AI Under the EU AI Act

The European Union’s Artificial Intelligence Act (AI Act) marks a significant milestone as the world’s first comprehensive legal framework for artificial intelligence (AI). This landmark legislation aims to address the inherent risks of AI technology while fostering trustworthy innovation across Europe. It establishes risk-based rules for companies that develop, deploy, or use AI in the EU, regardless of where they are based. The AI Act is being implemented progressively, and the second key deadline has recently been reached.

Stage 1: Prohibited AI Systems and Staff Training

As of February 2, 2025, AI systems deemed to pose unacceptable risks are prohibited from being placed on the EU market, put into service, or used within the EU. In addition to these prohibitions, providers and deployers of AI systems must ensure a sufficient level of AI literacy among their staff, establishing a baseline understanding of AI functionality and its implications.

Stage 2: General Purpose AI (GPAI) Models

Starting August 2, 2025, obligations concerning General Purpose AI (GPAI) models apply to models placed on the market on or after that date. Models already on the market before this deadline have until August 2, 2027, to comply with the AI Act’s stipulations. A GPAI model is defined as an AI model trained on extensive data using a cumulative amount of compute exceeding 10²³ floating-point operations (FLOP, a total count of operations, not operations per second), capable of generating language (text or audio), text-to-image, or text-to-video outputs, and exhibiting significant generality, meaning it can perform a wide range of distinct tasks across various downstream systems or applications.

Examples of GPAI models include large language models such as GPT-4, Google Gemini, and Llama 3.1. AI systems, such as social media chatbots like Grok, are built on top of these general-purpose models. The AI Act differentiates between GPAI models presenting systemic risk, in particular those trained with more than 10²⁵ FLOP of cumulative compute, and those that do not.
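These two compute thresholds lend themselves to a mechanical first-pass check. The following minimal Python sketch, in which all names are hypothetical, compares a model’s cumulative training compute against the 10²³ and 10²⁵ FLOP figures cited above; actual classification under the Act turns on legal criteria beyond raw compute.

```python
# Illustrative first-pass check only; classification under the AI Act
# involves legal analysis beyond raw training compute.

GPAI_THRESHOLD_FLOP = 1e23           # indicative criterion for a GPAI model
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # presumption of systemic risk

def classify_by_training_compute(training_flop: float) -> str:
    """Return a coarse AI Act category from cumulative training compute.

    Note: the thresholds are totals of floating-point operations used in
    training (FLOP), not operations per second.
    """
    if training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "GPAI model presumed to present systemic risk"
    if training_flop > GPAI_THRESHOLD_FLOP:
        return "GPAI model"
    return "below the GPAI compute criterion"

print(classify_by_training_compute(3e25))  # frontier-scale training run
print(classify_by_training_compute(5e23))  # mid-sized training run
```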

Organizations that develop GPAI models or place them on the EU market must classify their models and comply with obligations ranging from technical documentation to transparency measures. For instance, a public summary of the content used for training is mandatory for all GPAI models, as is cooperation with competent authorities, even where a model presents no systemic risk. Models identified as presenting systemic risk face additional requirements, including serious incident notification, model evaluation, systemic risk assessment and mitigation, and cybersecurity measures.
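To make these tiered obligations concrete, here is a hypothetical checklist sketch, assuming the coarse classification strings from the previous example; the duty lists paraphrase the obligations summarized above and are neither exhaustive nor legal advice.

```python
# Hypothetical compliance checklist paraphrasing the obligations summarized
# in the article; not an exhaustive reading of the Act and not legal advice.

BASELINE_OBLIGATIONS = [
    "maintain technical documentation",
    "provide information to downstream providers",
    "publish a summary of the content used for training",
    "cooperate with competent authorities",
]

SYSTEMIC_RISK_OBLIGATIONS = [
    "notify serious incidents",
    "perform model evaluation",
    "assess and mitigate systemic risks",
    "implement adequate cybersecurity measures",
]

def obligations_for(classification: str) -> list[str]:
    """Return the applicable checklist for a coarse classification string."""
    if "systemic risk" in classification:
        return BASELINE_OBLIGATIONS + SYSTEMIC_RISK_OBLIGATIONS
    if "GPAI model" in classification:
        return list(BASELINE_OBLIGATIONS)
    return []

for duty in obligations_for("GPAI model presumed to present systemic risk"):
    print("-", duty)
```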

The European Commission has established a new supervisory body, the AI Office, which oversees all GPAI models. On July 18, 2025, guidelines were published to clarify key provisions of the AI Act as they pertain to GPAI models. These guidelines are intended to assist organizations in their compliance efforts, providing insight into the Commission’s interpretation of the relevant provisions.

Furthermore, a General-Purpose AI Code of Practice has been developed by independent experts as a voluntary tool for GPAI model providers to demonstrate compliance with the AI Act. The Code of Practice covers transparency, copyright, and the safety and security of GPAI models. Providers are encouraged to sign the Code, thereby committing to adhere to it.

Remaining Provisions and Future Compliance

Other general sections of the AI Act came into effect on August 2, 2025, including scope and definitions, frameworks regarding competent authorities, enforcement mechanisms, and confidentiality obligations. The remaining provisions of the AI Act will apply from August 2, 2026, except for Article 6(1), which applies from August 2, 2027.

As the broader provisions of the AI Act approach their application date, organizations must prioritize two critical preparatory actions: comprehensive AI system mapping and precise role definition under the AI Act’s framework. The differentiation between roles such as AI system providers, deployers, distributors, and importers carries significant compliance implications, as each role triggers unique obligations and governance requirements.
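As a starting point for such mapping, the sketch below shows one way an organization might record each system together with its AI Act role; the record fields and role list are illustrative assumptions, not structures prescribed by the Act.

```python
# Minimal AI system inventory sketch; field names and categories are
# illustrative assumptions, not terminology mandated by the AI Act.

from dataclasses import dataclass

VALID_ROLES = {"provider", "deployer", "distributor", "importer"}

@dataclass
class AISystemRecord:
    name: str
    role: str                            # the organization's role for this system
    risk_category: str = "unclassified"  # e.g. "high-risk", "GPAI", "transparency"

    def __post_init__(self) -> None:
        if self.role not in VALID_ROLES:
            raise ValueError(f"unknown AI Act role: {self.role!r}")

inventory = [
    AISystemRecord("resume screening tool", role="deployer", risk_category="high-risk"),
    AISystemRecord("in-house language model", role="provider", risk_category="GPAI"),
]

for record in inventory:
    print(f"{record.name}: {record.role} / {record.risk_category}")
```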

Given the global reach and technical complexity of the AI Act, organizations are advised to conduct thorough assessments of their systems to identify those categorized as high-risk AI, GPAI models, or AI systems subject to transparency obligations. Early preparation in documentation, transparency, and copyright compliance is crucial to mitigate enforcement risks and avoid business disruptions.
