EU AI Act: Final Draft Offers New Guidance for General Purpose AI Compliance

As the deadline approaches for finalizing guidance on how general-purpose AI (GPAI) models must comply with the EU AI Act, a third draft of the Code of Practice has been released. Published on March 11, 2025, this draft is expected to be the final iteration before the guidance is formally adopted.

Overview of the Code of Practice

The Code of Practice is designed to help GPAI model makers understand their legal obligations and avoid sanctions for noncompliance. Notably, penalties for breaches of GPAI requirements can reach 3% of a company’s global annual revenue.
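
For a rough sense of the stakes, the short Python sketch below works through that ceiling for a hypothetical provider; the revenue figure and function name are illustrative, and only the percentage cap cited above is modeled.

```python
# A minimal sketch, assuming a hypothetical provider: the 3%-of-revenue ceiling
# described above. Only the percentage cap mentioned in the article is modeled.

def gpai_fine_cap(global_annual_revenue_eur: float, rate: float = 0.03) -> float:
    """Return the upper bound of a GPAI fine under the 3%-of-revenue rule."""
    return rate * global_annual_revenue_eur

# Hypothetical provider with 10 billion euros in global annual revenue.
print(f"Maximum exposure: EUR {gpai_fine_cap(10_000_000_000):,.0f}")  # EUR 300,000,000
```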

This revision emphasizes a streamlined structure with refined commitments and measures, reflecting feedback on the second draft published in December 2024. It is organized into sections covering commitments and detailed guidance for transparency, copyright, and safety obligations.

Key Areas of Focus

One of the major areas addressed is transparency. The guidance indicates that GPAI providers will need to complete a model documentation form, ensuring that downstream deployers of their technology have access to the information they need for their own compliance.
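
To make the idea concrete, here is a minimal sketch of the kind of structured, machine-readable record such a form implies. The field names are assumptions chosen for illustration, not the official Model Documentation Form published with the Code of Practice.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical documentation record; field names are illustrative only,
# not the official form defined in the Code of Practice.
@dataclass
class ModelDocumentation:
    model_name: str
    provider: str
    release_date: str            # ISO 8601 date
    intended_uses: list[str]
    training_data_summary: str   # high-level description, not the data itself
    known_limitations: list[str]
    downstream_contact: str      # point of contact for deployers

doc = ModelDocumentation(
    model_name="example-gpai-1",
    provider="Example AI GmbH",
    release_date="2025-08-02",
    intended_uses=["text generation", "summarization"],
    training_data_summary="Publicly available web text plus licensed corpora.",
    known_limitations=["May produce inaccurate output", "Not evaluated for medical use"],
    downstream_contact="compliance@example.eu",
)

# Serialize so downstream deployers can consume the record programmatically.
print(json.dumps(asdict(doc), indent=2))
```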

Another contentious area is copyright. The current draft relies on terms such as “best efforts” and “reasonable measures,” wording that could let AI companies continue acquiring copyright-protected material for model training while limiting their exposure to infringement claims.

Safety and Security Obligations

The EU AI Act imposes safety and security requirements only on the most powerful models, those classified as posing systemic risk. The latest draft narrows some previously recommended measures to streamline compliance.

Pressure from the U.S.

The ongoing discussions surrounding the EU AI Act have not gone unnoticed by the U.S. administration. Criticism of European lawmaking and AI regulation has emerged, with U.S. officials warning that overregulation could hamper innovation. This backdrop adds to the pressure on the EU to ease requirements amid lobbying from American tech firms.

Future Implications

As the final guidance is prepared, the European Commission is simultaneously producing additional clarifying documents to define GPAIs and their responsibilities. Stakeholders are advised to stay tuned for further updates that may shape the operational landscape for AI developers in Europe.

The outcomes of these discussions and the implementation of the Code will likely have profound implications for the future of AI governance, balancing innovation with regulatory compliance.
