EU AI Act at the Crossroads: GPAI Rules, AI Literacy Guidance and Potential Delays

The EU AI Act (AI Act), whose first obligations took effect in February 2025, introduces a risk-based regulatory framework for AI systems and a parallel regime for general-purpose AI (GPAI) models. It imposes obligations on a range of actors, including providers, deployers, importers, and manufacturers, and requires organizations to ensure an appropriate level of AI literacy among staff. The AI Act also prohibits “unacceptable risk” AI use cases and imposes rigorous requirements on “high-risk” systems.

As of mid-2025, the implementation landscape is evolving. This update takes stock of where things stand, focusing on: (i) new guidance on the AI literacy obligations for providers and deployers; (ii) the status of the General-Purpose AI Code of Practice under development and its implications; and (iii) the prospect of delayed enforcement of some of the AI Act’s key provisions.

AI Literacy Requirements

Effective February 2, 2025, Article 4 of the AI Act mandates that providers (entities that develop AI systems for the EU market under their own name) and deployers of AI systems (those using AI systems under their authority) ensure a “sufficient level of AI literacy” among personnel. This requirement applies to all AI systems, not only high-risk AI systems.

On May 7, 2025, the European Commission published detailed FAQs clarifying the scope of this obligation. According to the guidance, AI literacy encompasses not only a general understanding of AI capabilities and limitations but also the ability to assess legal and ethical implications, interpret outputs critically, and apply appropriate oversight. Of particular relevance to businesses of all types, the guidance indicates that organizations using generative AI for business functions (e.g., marketing copy, translations) must ensure that users are trained on associated risks, such as hallucinations.

The obligation extends to all personnel interacting with AI systems, including contractors and service providers. While the AI Act does not prescribe a specific curriculum, the European Commission suggests that AI literacy initiatives should reflect the organization’s role (provider or deployer), the nature and risk profile of the AI systems involved, and the technical competencies of staff. Merely relying on user instructions or passive documentation would generally not be considered sufficient, and bespoke policies, procedures, and training may be required.

Training should be tailored to specific roles and responsibilities and integrated into broader risk management and compliance systems. The guidance further emphasizes that the obligation applies, and will be enforced, proportionately: those deploying high-risk systems are expected to implement more robust literacy programs.
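The AI Act does not prescribe how literacy efforts must be documented, but organizations will need some way to evidence role-appropriate training. Below is a minimal illustrative sketch, in Python, of how a deployer might track coverage proportionally to system risk; all tier names, module names, and roles are hypothetical and are not drawn from the Act or the Commission’s FAQs.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """Simplified tiers loosely mirroring the AI Act's risk categories."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3


@dataclass
class TrainingRecord:
    """One completed AI literacy module for one staff member."""
    person: str
    role: str          # e.g. "marketing", "ml-engineer", "contractor"
    module: str        # e.g. "genai-hallucination-risks"
    completed_on: date


# Hypothetical mapping: higher-risk deployments demand more modules.
REQUIRED_MODULES = {
    RiskTier.MINIMAL: {"ai-basics"},
    RiskTier.LIMITED: {"ai-basics", "genai-hallucination-risks"},
    RiskTier.HIGH: {"ai-basics", "genai-hallucination-risks",
                    "human-oversight", "legal-ethical-implications"},
}


def literacy_gaps(records: list[TrainingRecord], person: str,
                  tier: RiskTier) -> set[str]:
    """Return the modules `person` still needs for systems at `tier`."""
    completed = {r.module for r in records if r.person == person}
    return REQUIRED_MODULES[tier] - completed
```

A real program would also track refresher cycles and map modules to the specific AI systems each role touches, consistent with the Commission’s emphasis on tailoring.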

GPAI Models and Code of Practice

The AI Act defines a GPAI model by reference to the “significant generality” of its capabilities and its competence in “performing a wide range of distinct tasks regardless of the way the model is placed on the market,” capturing large language models such as GPT-4, Gemini 2.5 Pro, and DeepSeek-VL. On June 12, 2025, the European Commission published a set of FAQs addressing what constitutes a GPAI model and the AI Act’s obligations as they apply to such models.

Article 56 of the AI Act provides for the publication of a General-Purpose AI Code of Practice by the European AI Office, including general guidance for providers of GPAI models on compliance with the AI Act’s obligations, and further guidance for models that present a “systemic risk.” While voluntary, the GPAI Code will eventually form the basis on which the European AI Office assesses compliance with the AI Act. The GPAI Code covers transparency rules, from which providers of certain free and open-source models are exempt, and copyright-related rules, which apply to all providers of GPAI models, as well as specific technical and governance requirements for providers of GPAI models presenting “systemic risk.”
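The Act itself supplies one bright-line criterion for the “systemic risk” category: under Article 51(2), a GPAI model is presumed to present systemic risk when the cumulative compute used for its training exceeds 10^25 floating-point operations (FLOPs), a threshold the Commission may adjust over time. A minimal sketch of that classification step follows; the constant and function names are illustrative only.

```python
# Article 51(2) of the AI Act presumes "systemic risk" where the cumulative
# compute used to train a GPAI model exceeds 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if a model meets the Act's compute-based presumption.

    The presumption is rebuttable, and the European Commission may also
    designate models as presenting systemic risk on other grounds, so
    this check is a starting point rather than a complete classification.
    """
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


# Example: a model trained with ~5 x 10^25 FLOPs is presumed in scope;
# one trained with 10^24 FLOPs is not (absent a Commission designation).
assert presumed_systemic_risk(5e25)
assert not presumed_systemic_risk(1e24)
```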

Following a first draft published in November 2024, the European Commission has since published second and third drafts of the GPAI Code. The third draft, published on March 11, 2025, notably removed the key performance indicators that the second draft had introduced as a compliance benchmark, among various streamlining and reorganization changes. It also emphasizes the need for the European AI Office to review and update the GPAI Code over time as technology advances. This third draft is expected to be the last open for feedback and will form the basis of the final GPAI Code. However, finalization has slipped from the initial deadline of May 2, 2025, and is now expected in August 2025, raising industry concerns about regulatory uncertainty and uneven compliance preparation.

Potential Delays in Implementation

While the first provisions of the AI Act took effect in February 2025, other obligations, such as those placed on providers of GPAI models, do not apply until August 2, 2025, under the current timelines, with further phases of implementation arriving through summer 2027. The recent delay of the GPAI Code until August has fueled speculation that certain key provisions of the AI Act might also be delayed. It was reported in May that the European Commission was considering delaying enforcement of the GPAI obligations to allow it to “simplify” some of the rules, a motivation consistent with the Commission’s other recent simplification efforts amid calls from businesses to reduce the regulatory burden of doing business in the EU. While no delay has yet been confirmed by official EU sources, various influential figures from Member States, including the Swedish Prime Minister, have voiced support for a delay in implementation, in some cases of up to two years.
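For planning purposes, the currently legislated phase-in (set out in Article 113 of the AI Act) can be captured as a simple lookup. The sketch below encodes the application dates as they stand before any delay is adopted; the data structure and helper function are illustrative, not an official schedule.

```python
from datetime import date

# Application dates currently legislated under the AI Act (Article 113),
# before any delay the European Commission may yet propose.
AI_ACT_PHASES = [
    (date(2025, 2, 2), "prohibited practices and AI literacy obligations"),
    (date(2025, 8, 2), "GPAI model obligations and governance rules"),
    (date(2026, 8, 2), "most remaining obligations, incl. Annex III high-risk systems"),
    (date(2027, 8, 2), "obligations for high-risk AI embedded in regulated products"),
]


def obligations_in_force(on: date) -> list[str]:
    """List the phases that already apply on a given date."""
    return [label for start, label in AI_ACT_PHASES if on >= start]


# Example: as of mid-2025, only the February 2025 phase applies.
print(obligations_in_force(date(2025, 6, 30)))
```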

Key Takeaways

Organizations developing or deploying AI systems within the EU must navigate these evolving requirements carefully. The potential delays in enforcement provide a window to strengthen compliance strategies but also introduce uncertainty. Ensuring AI literacy among staff is now a legal obligation, necessitating the development of tailored training programs. For providers of GPAI models, understanding and preparing for forthcoming obligations is critical, even as final guidelines remain pending.

In-scope organizations should:

  • Enhance AI Literacy: Develop and implement training programs to meet the AI literacy requirements outlined in Article 4.
  • Monitor Regulatory Updates: Stay informed about changes in enforcement timelines and the finalization of the GPAI Code of Practice.
  • Prepare for GPAI Obligations: Even in the absence of finalized guidelines, begin assessing current practices against the anticipated requirements for GPAI models.

For assistance with any of these obligations, organizations should consult with legal experts specializing in AI law and compliance.
