EU AI Act Implementation: New Obligations for General-Purpose AI Systems Take Effect

The European Union’s Artificial Intelligence Act (AI Act) is the world’s first comprehensive legal framework for artificial intelligence (AI). This landmark legislation aims to address the risks inherent in AI technology while fostering trustworthy innovation across Europe. It establishes risk-based rules that affect companies developing, deploying, or using AI within the EU, regardless of where they are based. The AI Act is being implemented in stages, with the second key deadline recently reached.

Stage 1: Prohibited AI Systems and Staff Training

As of February 2, 2025, AI systems deemed to pose unacceptable risks are prohibited from being placed on the EU market, put into service, or used within the EU. In addition to these prohibitions, providers and deployers of AI systems must ensure a sufficient level of AI literacy among their staff, so that personnel have a baseline understanding of how AI works and what its implications are.

Stage 2: General Purpose AI (GPAI) Models

Starting August 2, 2025, obligations concerning General Purpose AI (GPAI) models apply to models placed on the market on or after that date. Models already on the market before this deadline have until August 2, 2027, to comply with the AI Act’s requirements. A GPAI model is defined as an AI model trained on large amounts of data using more than 10²³ floating-point operations (FLOP, a measure of total training compute, not operations per second), capable of generating language (text or audio), text-to-image, or text-to-video outputs, and displaying significant generality, meaning it can competently perform a wide range of distinct tasks in various downstream systems or applications.

Examples of GPAI models include large language models such as GPT-4, Google Gemini, and Llama 3.1. AI systems, such as the social media chatbot Grok, are built on top of such general-purpose models. The AI Act distinguishes between GPAI models presenting systemic risk, in particular those whose training compute exceeds the 10²⁵ FLOP threshold, and those that do not.
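To make the two compute thresholds concrete, the short Python sketch below classifies a model by estimated training compute. The 10²³ and 10²⁵ FLOP figures come from the Act and the Commission’s guidelines; the 6 × parameters × tokens estimate is a widely used rule of thumb for transformer training compute, not a legal formula, and the example model is hypothetical.

```python
# Illustrative sketch of the EU AI Act's GPAI compute thresholds.
# Thresholds are from the Act / Commission guidelines; the
# 6 * params * tokens estimate is a common heuristic, not a legal test.

GPAI_THRESHOLD_FLOP = 1e23            # indicative presumption of a GPAI model
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25   # presumption of systemic risk (Art. 51)

def estimate_training_flop(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per token."""
    return 6.0 * n_parameters * n_tokens

def classify(training_flop: float) -> str:
    """Map estimated training compute to the Act's GPAI presumptions."""
    if training_flop >= SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "GPAI model with systemic risk"
    if training_flop >= GPAI_THRESHOLD_FLOP:
        return "GPAI model"
    return "below the GPAI compute presumption"

# Hypothetical model: 70B parameters trained on 15T tokens (~6.3e24 FLOP).
flop = estimate_training_flop(70e9, 15e12)
print(f"{flop:.2e} FLOP -> {classify(flop)}")
```

Note that compute is only an indicative criterion: a model below 10²³ FLOP can still qualify as a GPAI model if it displays significant generality, and the Commission can designate a model as presenting systemic risk even below the 10²⁵ FLOP mark.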

Organizations that develop GPAI models or place them on the EU market must classify their models and comply with tiered obligations. All GPAI providers face baseline duties: technical documentation, information for downstream providers, a copyright policy, a mandatory public summary of the content used for training, and cooperation with competent authorities. Models identified as presenting systemic risk carry additional requirements, including model evaluation, systemic risk assessment and mitigation, serious incident notification, and cybersecurity measures.
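These tiers lend themselves to a simple compliance checklist. The sketch below, which pairs with the classification helper above, is an illustrative assumption for internal tracking; the obligation labels paraphrase Articles 53 and 55 of the Act and are not an official taxonomy.

```python
# Hedged sketch: mapping a model's classification to its AI Act duties.
# Labels paraphrase Arts. 53 and 55; the structure is illustrative only.

BASELINE_OBLIGATIONS = [
    "maintain technical documentation",
    "provide information to downstream providers",
    "adopt a copyright-compliance policy",
    "publish a summary of training content",
    "cooperate with competent authorities",
]

SYSTEMIC_RISK_OBLIGATIONS = BASELINE_OBLIGATIONS + [
    "perform model evaluation (incl. adversarial testing)",
    "assess and mitigate systemic risks",
    "report serious incidents to the AI Office",
    "ensure adequate cybersecurity protection",
]

def obligations_for(classification: str) -> list[str]:
    """Return the duty checklist for a classified GPAI model."""
    if classification == "GPAI model with systemic risk":
        return SYSTEMIC_RISK_OBLIGATIONS
    if classification == "GPAI model":
        return BASELINE_OBLIGATIONS
    return []  # below the presumption: no GPAI-specific duties assumed

for duty in obligations_for("GPAI model"):
    print("-", duty)
```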

Supervision of all GPAI models rests with the AI Office, a body established within the European Commission. On July 18, 2025, the Commission published guidelines clarifying key provisions of the AI Act as they apply to GPAI models. These guidelines are intended to assist organizations in their compliance efforts and provide insight into the Commission’s interpretation of the relevant provisions.

Furthermore, a General-Purpose AI Code of Practice, developed by independent experts, serves as a voluntary tool for GPAI model providers to demonstrate compliance with the AI Act. The Code covers transparency, copyright, and the safety and security of GPAI models. Providers are encouraged to sign the Code, thereby committing to adhere to it.

Remaining Provisions and Future Compliance

Other parts of the AI Act also took effect on August 2, 2025, including the rules on notified bodies, the governance framework and competent authorities, penalties and enforcement mechanisms, and confidentiality obligations. The remaining provisions will apply from August 2, 2026, with the exception of Article 6(1) and its corresponding obligations, which apply from August 2, 2027.

As the broader provisions of the AI Act approach their application date, organizations must prioritize two critical preparatory actions: comprehensive AI system mapping and precise role definition under the AI Act’s framework. The differentiation between roles such as AI system providers, deployers, distributors, and importers carries significant compliance implications, as each role triggers unique obligations and governance requirements.

Given the global reach and technical complexity of the AI Act, organizations are advised to conduct thorough assessments of their systems to identify those categorized as high-risk AI, GPAI models, or AI systems subject to transparency obligations. Early preparation in documentation, transparency, and copyright compliance is crucial to mitigate enforcement risks and avoid business disruptions.
