EU AI Act: Final Draft Offers New Guidance for General Purpose AI Compliance

As the deadline approaches for finalizing guidance on how general-purpose AI (GPAI) models must comply with the EU AI Act, a third draft of the Code of Practice has been released. Published on March 11, 2025, this draft is expected to be the final iteration before the official guidance is adopted.

Overview of the Code of Practice

The Code of Practice is designed to assist GPAI model makers in understanding their legal obligations and in avoiding sanctions for noncompliance. Notably, penalties for breaches of GPAI requirements can reach up to 3% of a company’s global annual revenue.

This latest revision features a streamlined structure with refined commitments and measures, reflecting feedback on the second draft, which was published in December 2024. The draft is organized into sections covering commitments and detailed guidance on transparency, copyright, and safety and security obligations.

Key Areas of Focus

One of the major areas addressed is transparency. The guidance indicates that GPAI model providers will need to complete a model documentation form, ensuring that downstream deployers of their technology can access the information they need for their own compliance.
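For a rough sense of what that could mean in practice, the sketch below shows one way a provider might internally structure the kind of information such a form is expected to capture and share with downstream deployers. This is an illustrative assumption only: the class name, field names, and example values are hypothetical and are not taken from the Code of Practice's actual template.

```python
# Hypothetical sketch: an internal record a GPAI provider might use to gather
# documentation for downstream deployers. Field names are illustrative
# assumptions, not the Code of Practice's Model Documentation Form.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    release_date: str                                   # ISO 8601 date string
    intended_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""                     # high-level description only
    known_limitations: list[str] = field(default_factory=list)
    deployer_contact: str = ""                          # point of contact for deployers

    def to_json(self) -> str:
        """Serialize the record so it can be handed to downstream deployers."""
        return json.dumps(asdict(self), indent=2)


if __name__ == "__main__":
    doc = ModelDocumentation(
        model_name="example-gpai-model",
        provider="Example AI Ltd.",
        release_date="2025-03-11",
        intended_uses=["text generation", "summarization"],
        training_data_summary="Publicly available web text (illustrative).",
        known_limitations=["May produce inaccurate output"],
        deployer_contact="compliance@example.com",
    )
    print(doc.to_json())
```

However a provider organizes this internally, the point of the transparency commitment is that the resulting documentation reaches downstream deployers in a usable form.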

Another contentious area is copyright. The current draft relies on terms such as “best efforts” and “reasonable measures,” language that could let AI companies that mine data for training continue acquiring copyright-protected material while limiting their exposure to infringement claims.

Safety and Security Obligations

The EU AI Act imposes safety and security requirements specifically on the most powerful models, identified as those with systemic risk. The latest draft narrows some previously recommended measures to streamline compliance.

Pressure from the U.S.

The ongoing discussions surrounding the EU AI Act have not gone unnoticed by the U.S. administration. U.S. officials have criticized European lawmaking and AI regulation, warning that overregulation could hamper innovation. This backdrop adds pressure on the EU to ease requirements amid lobbying from American tech firms.

Future Implications

As the final guidance is prepared, the European Commission is also producing additional clarifying documents that define which models count as GPAIs and what responsibilities apply to them. Stakeholders should watch for further updates that may shape the operational landscape for AI developers in Europe.

The outcomes of these discussions and the implementation of the Code will likely have profound implications for the future of AI governance, balancing innovation with regulatory compliance.

More Insights

AI Regulations: Comparing the EU’s AI Act with Australia’s Approach

Global companies need to navigate the differing AI regulations in the European Union and Australia, with the EU's AI Act setting stringent requirements based on risk levels, while Australia adopts a...

Quebec’s New AI Guidelines for Higher Education

Quebec has released its AI policy for universities and Cégeps, outlining guidelines for the responsible use of generative AI in higher education. The policy aims to address ethical considerations and...

AI Literacy: The Compliance Imperative for Businesses

As AI adoption accelerates, regulatory expectations are rising, particularly with the EU's AI Act, which mandates that staff be AI literate. This article emphasizes the importance of...

Germany’s Approach to Implementing the AI Act

Germany is moving forward with the implementation of the EU AI Act, designating the Federal Network Agency (BNetzA) as the central authority for monitoring compliance and promoting innovation. The...

Global Call for AI Safety Standards by 2026

World leaders and AI pioneers are calling on the United Nations to implement binding global safeguards for artificial intelligence by 2026. This initiative aims to address the growing concerns...

Governance in the Era of AI and Zero Trust

In 2025, AI has transitioned from mere buzz to practical application across various industries, highlighting the urgent need for a robust governance framework aligned with the zero trust economy...

AI Governance Shift: From Regulation to Technical Secretariat

The upcoming governance framework on artificial intelligence in India may introduce a "technical secretariat" to coordinate AI policies across government departments, moving away from the previous...

AI Safety as a Catalyst for Innovation in Global Majority Nations

The commentary discusses the tension between regulating AI for safety and promoting innovation, emphasizing that investments in AI safety and security can foster sustainable development in Global...

ASEAN’s AI Governance: Charting a Distinct Path

ASEAN's approach to AI governance is characterized by a consensus-driven, voluntary, and principles-based framework that allows member states to navigate their unique challenges and capacities...