The EU AI Act Newsletter #82: GPAI Code of Practice Goes Live
Created Date: July 22, 2025
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter providing up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
General-Purpose AI Code of Practice published
The European Commission has released a voluntary Code of Practice to help industry comply with AI Act obligations regarding safety, transparency, and copyright for general-purpose AI (GPAI) models. Published on 10 July 2025, the Code was developed by independent experts through a multi-stakeholder process. Member States and the Commission will assess its adequacy in the coming weeks. Once endorsed, AI model providers who voluntarily adopt the Code can demonstrate AI Act compliance while reducing administrative burden and gaining greater legal certainty compared to alternative compliance methods. The Code comprises three separately authored chapters: 1) Transparency, 2) Copyright (applicable to all GPAI model providers under Article 53), and 3) Safety and Security (relevant only to providers of advanced models with systemic risk under Article 55).
AI Office invites GPAI providers to sign the Code of Practice
The EU AI Office is inviting providers of general-purpose AI models to sign the General-Purpose AI Code of Practice. Signatories will be publicly listed on 1 August 2025, one day before the AI Act’s obligations for GPAI providers take effect on 2 August 2025. By signing, providers signal their intent to adhere to the Code and will benefit from streamlined compliance with AI Act obligations. The Commission will focus enforcement on monitoring signatories’ adherence to the Code, offering greater predictability and reduced administrative burden.
Commission publishes guidelines for providers of general-purpose AI models
The European Commission has released guidelines to help providers of general-purpose AI models comply with AI Act obligations taking effect on 2 August 2025. The guidelines provide legal certainty across the AI value chain and complement the General-Purpose AI Code of Practice. Executive Vice-President Henna Virkkunen stated that the guidelines support smooth and effective AI Act application, helping AI actors from start-ups to major developers innovate confidently whilst ensuring models remain safe, transparent, and aligned with European values. The guidelines define GPAI models as those trained with computational resources exceeding 10²³ floating-point operations (FLOP) and capable of generating language, text-to-image or text-to-video content.
They clarify what constitutes a ‘provider’ and ‘placing on the market’, outline exemptions for models released under free and open-source licenses meeting transparency conditions, and explain the implications of adhering to the GPAI Code of Practice. They also specify obligations for providers of advanced models posing systemic risks, including threats to fundamental rights and safety and the potential loss of control, requiring risk assessment and mitigation measures.
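For orientation, the 10²³ FLOP threshold can be checked against a rough estimate of training compute. The sketch below uses the common rule of thumb of roughly six FLOP per parameter per training token for transformer models; this heuristic and the example figures are illustrative assumptions, not an official Commission methodology.

```python
# Back-of-the-envelope check against the guidelines' 10^23 FLOP threshold.
# The ~6 FLOP per parameter per training token rule of thumb is a common
# heuristic for transformer training, not an official Commission method.

GPAI_THRESHOLD_FLOP = 1e23  # indicative compute threshold from the guidelines


def estimate_training_flop(n_params: float, n_tokens: float) -> float:
    """Estimate training compute as roughly 6 FLOP per parameter per token."""
    return 6.0 * n_params * n_tokens


def exceeds_gpai_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the rough estimate exceeds the 10^23 FLOP threshold."""
    return estimate_training_flop(n_params, n_tokens) > GPAI_THRESHOLD_FLOP


# Example: a hypothetical 7-billion-parameter model trained on 2 trillion
# tokens gives 6 * 7e9 * 2e12 = 8.4e22 FLOP, just under the threshold.
print(exceeds_gpai_threshold(7e9, 2e12))  # False
```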
Commission launches call for applications to join AI Act Advisory Forum
The European Commission has opened applications for the Advisory Forum under the AI Act, inviting stakeholders from civil society, academia, industry, SMEs, and start-ups to contribute to responsible implementation of the AI regulation. The Advisory Forum will serve as a general advisory body to the Commission, complementing the Scientific Panel which will advise the AI Office and national market surveillance authorities specifically on general-purpose AI. The forum will provide independent technical expertise on a broad range of AI Act issues, including standardization and implementation challenges.
The forum will maintain balanced representation of commercial and non-commercial interests, regional diversity, and gender equality. Permanent members include key EU agencies such as the Fundamental Rights Agency (FRA) and cybersecurity agency ENISA, alongside standardization bodies CEN, CENELEC, and ETSI. Members serve renewable two-year terms and must actively participate in meetings, subgroups, and written contributions without remuneration. Applications close on 14 September 2025 for experts from organizations with proven AI-related track records.
AI Office launches €9 million GPAI safety tender
The EU AI Office has opened a call for tenders worth €9 million to procure technical support for AI Act enforcement, focusing on assessing and monitoring systemic risks from General-Purpose AI models at the EU level. This Digital Europe Programme-funded initiative aims to strengthen the AI Office’s capacity to evaluate and monitor compliance, particularly regarding GPAI systems posing significant risks to public safety, security, and fundamental rights. The tender is divided into six lots: five addressing distinct systemic risks and one providing cross-cutting support.
Lots 1–5 cover chemical, biological, radiological and nuclear (CBRN) risks, cyber offence risks, loss of control risks, harmful manipulation risks, and sociotechnical risks. Each involves risk modeling and scenario development, adaptation and creation of evaluation tools, technical support for evaluations, and ongoing risk monitoring and analysis. Lot 6 provides cross-cutting support for conducting agentic evaluations, focusing on models’ autonomous behavior in dynamic or open-ended tasks. Interested parties may submit proposals until 25 August 2025.
Analyses
Industry reactions to the Code of Practice
Some major technology companies and associations have responded to the General-Purpose AI Code of Practice. OpenAI announced its intention to sign, contingent on formal approval by the AI Board during the adequacy assessment. Microsoft President Brad Smith indicated that the company will likely sign once it has reviewed the documents, welcoming direct AI Office engagement with industry. Domyn has also announced its intention to sign. However, Meta declined to sign, with Global Affairs Chief Joel Kaplan calling it overreach that will stunt growth.
Industry associations expressed concerns about implementation. ITI's Marco Leto Barone stated that companies will assess whether the Code offers a clear, workable basis for compliance, warning that complex measures going beyond the scope of the AI Act would undermine legal certainty and could affect industry uptake. CCIA Europe criticized the Code, arguing it still imposes disproportionate burdens on AI providers.
The Code of Practice advances AI safety
Henry Papadatos, Managing Director of SaferAI, argued that the EU’s Code of Practice provides strong incentives for frontier AI developers to adopt measurably safer practices. Companies following the Code gain a “presumption of conformity” with AI Act Articles 53 and 55, meaning regulators assume compliance as long as they meet the Code’s standards. A similarly powerful incentive previously drove widespread adoption of the EU’s Code of Practice on Disinformation.
The Code mandates a comprehensive risk management process requiring companies to predetermine acceptable risk levels and keep risks below these thresholds through assessment and mitigation. Compliance must be documented in two key instruments: a Framework (similar to existing Frontier AI Safety Policies) and a Model Report for each model (akin to a model card) showing how the Framework was applied.
Companies must consider specific risk categories (CBRN, cyber, loss of control, and manipulation), a significant improvement, as no current company framework comprehensively addresses all of these areas. The Code represents substantial progress towards safer AI development, meaningfully improving on current industry practice through explicit risk modeling, operationalized external evaluations, and mandatory public transparency. Its influence will likely extend beyond the EU, providing a regulatory blueprint for other jurisdictions.
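As a rough illustration of the threshold-based process described above, the sketch below encodes predetermined acceptable risk levels per category and flags any assessed level that exceeds them. The category names mirror the Code’s risk areas, but the numeric scale and all threshold values are invented for illustration and do not come from the Code itself.

```python
# Illustrative sketch of the Code's threshold-based risk management:
# predetermine acceptable risk levels per category, then flag any
# assessed level exceeding its threshold. Category names mirror the
# Code's risk areas; the 0-1 scale and all values are invented.

ACCEPTABLE_RISK = {
    "cbrn": 0.2,
    "cyber": 0.3,
    "loss_of_control": 0.1,
    "manipulation": 0.3,
}


def flag_exceedances(assessed: dict[str, float]) -> list[str]:
    """Return risk categories whose assessed level exceeds its threshold."""
    return [
        category
        for category, level in assessed.items()
        if level > ACCEPTABLE_RISK.get(category, 0.0)
    ]


# Assessed levels of this kind would be documented in a Model Report.
assessed = {
    "cbrn": 0.10,
    "cyber": 0.40,  # above its 0.3 threshold, so mitigation is required
    "loss_of_control": 0.05,
    "manipulation": 0.20,
}
print(flag_exceedances(assessed))  # ['cyber']
```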