EU AI Act: Final Draft Offers New Guidance for General Purpose AI Compliance

As the deadline approaches for finalizing guidance on how general-purpose AI (GPAI) models must comply with the provisions of the EU AI Act, a third draft of the Code of Practice has been released. The draft, published on March 11, 2025, is expected to be the final iteration before the official guidance is adopted.

Overview of the Code of Practice

The Code of Practice is designed to assist GPAI model makers in understanding their legal obligations and in avoiding sanctions for noncompliance. Notably, penalties for breaches of GPAI requirements can reach up to 3% of a company’s global annual revenue.

This latest revision features a more streamlined structure, with refined commitments and measures that reflect feedback on the second draft published in December 2024. The draft is organized into sections covering commitments and detailed guidance for transparency, copyright, and safety obligations.

Key Areas of Focus

One of the major areas addressed is transparency. The guidance indicates that GPAI model makers will need to complete a Model Documentation Form, ensuring that downstream deployers of their technology have access to the information they need for their own compliance.
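By way of illustration only, the sketch below shows how a provider might keep such documentation in a machine-readable form that downstream deployers could consume. The field names and values are assumptions for this sketch; they do not reproduce the official Model Documentation Form.

```python
# Illustrative only: a hypothetical record a GPAI provider might maintain so that
# downstream deployers can retrieve the information they need. Field names are
# assumptions for this sketch, not the fields of the official Model Documentation Form.
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    release_date: str            # ISO 8601 date string
    intended_uses: list[str]     # uses the provider supports downstream
    training_data_summary: str   # high-level description of data sources
    licence: str                 # terms under which the model is distributed


doc = ModelDocumentation(
    model_name="example-gpai-7b",
    provider="Example AI GmbH",
    release_date="2025-03-11",
    intended_uses=["text generation", "summarisation"],
    training_data_summary="Publicly available web text; details omitted in this sketch.",
    licence="proprietary",
)

# Serialise so downstream deployers (or regulators) could consume the record programmatically.
print(json.dumps(asdict(doc), indent=2))
```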

Another contentious area is copyright. The current draft relies on language such as “best efforts” and “reasonable measures,” which could allow AI companies that mine data for training to continue using protected works while limiting their own exposure to infringement claims.

Safety and Security Obligations

The EU AI Act imposes safety and security requirements specifically on the most powerful models, identified as those with systemic risk. The latest draft narrows some previously recommended measures to streamline compliance.

Pressure from the U.S.

The ongoing discussions surrounding the EU AI Act have not gone unnoticed by the U.S. administration. Criticism of European lawmaking and AI regulations has emerged, with U.S. officials warning that overregulation could hamper innovation. This backdrop adds pressure for the EU to ease requirements amidst lobbying efforts from American tech firms.

Future Implications

As the final guidance is prepared, the European Commission is simultaneously producing additional clarifying documents to define GPAIs and their responsibilities. Stakeholders are advised to stay tuned for further updates that may shape the operational landscape for AI developers in Europe.

The outcomes of these discussions and the implementation of the Code will likely have profound implications for the future of AI governance, balancing innovation with regulatory compliance.
