The EU AI Act Newsletter #86: Concerns Around GPT-5 Compliance
Welcome to the EU AI Act Newsletter, a brief biweekly newsletter providing you with up-to-date developments and analyses of the EU artificial intelligence law.
Legislative Process
Commission Consultation on Transparent AI Systems
The European Commission has initiated a consultation to develop guidelines and a Code of Practice for transparent AI systems. This initiative particularly focuses on supporting providers and deployers of generative AI systems in detecting and labelling AI-generated or manipulated content. Under the AI Act, providers and deployers must inform users when they are interacting with an AI system, when they are exposed to emotion recognition or biometric categorization systems, and when they encounter AI-generated or manipulated content. The Commission is seeking input from a broad range of stakeholders, including AI providers, deployers, public and private organizations, academics, civil society representatives, supervisory authorities, and citizens. The consultation closes on October 2, 2025, and runs alongside a call for expressions of interest from stakeholders wishing to participate in drafting the Code of Practice. These transparency obligations will take effect on August 2, 2026.
German Privacy Watchdogs Upset by Implementation
According to Euractiv’s Maximilian Henning, German data protection authorities have strongly criticized the government’s draft implementation law for the AI Act, arguing it inappropriately diminishes their authority. The AI Act employs a risk-based regulatory framework overseen by designated national authorities. The main concern raised by 17 German state data protection authorities relates to the supervision of AI systems in sensitive areas including law enforcement, border management, justice, and democracy. The draft law assigns oversight responsibilities to the telecommunications regulator (BNetzA), which the data protection authorities argue contradicts the AI Act’s stipulation that data protection authorities should oversee high-risk AI applications in these sensitive domains. Meike Kamp, head of Berlin’s privacy authority, warned that delegating these responsibilities to BNetzA would result in a massive weakening of fundamental rights.
Analyses
ChatGPT May Not Be Following EU’s Rules Yet
Questions have arisen about OpenAI's compliance with AI Act requirements for its newly released GPT-5 model, particularly regarding training data disclosure obligations. The AI Act requires general-purpose AI developers to publish summaries of their training data, for which the AI Office provided a template in July. While models released before August 2, 2025, have until August 2, 2027 to comply, those released after that date must comply immediately. GPT-5, released on August 7, 2025, appears to lack the required training data summary and copyright policy, despite OpenAI being a signatory to the EU's Code of Practice. According to Petar Tsankov, CEO of AI compliance company LatticeFlow, the model likely qualifies for the "systemic risk" classification, requiring model evaluations and the management of potential systemic risks. The European Commission indicates that GPT-5's compliance requirements depend on whether it is considered a new model under the law, which the AI Office is currently assessing. However, enforcement will not begin until August 2026, giving OpenAI time to address any compliance issues.
The EU AI Office is Facing Hiring Challenges
Despite its crucial role in implementing the AI Act, the AI Office is facing significant staffing challenges. While it has attracted some notable talent and currently employs over 125 staff members, with plans to hire 35 more by the end of the year, key leadership positions remain unfilled. The Office is responsible for over 100 tasks, including enforcing the Code of Practice and levying substantial fines for non-compliance. Recruitment struggles stem from uncompetitive salaries, slow hiring processes, and pressure to ensure representation from member states. Current postings offer between $55,000 and $120,000, which, despite tax benefits, falls far short of private sector compensation, where technical staff can earn millions. The staffing shortage has become particularly pressing since general-purpose AI rules took effect on August 2. MEP Axel Voss suggests that the compliance and safety units alone need 200 staff, significantly more than currently proposed.
The EU is Still Grappling with the Complexities of AI Copyright
The EU’s implementation of the AI Act, including its Code of Practice (CoP), reveals ongoing tensions between copyright law and AI development needs. While transparency and safety requirements are relatively straightforward, copyright issues present significant challenges. Copyright obligations reduce the data available for training and, through licensing requirements, raise its cost. A prohibition on reproducing copyright-protected content in model outputs makes sense, as does a requirement to train only on lawfully accessible data. However, managing dataset transparency and handling the growing number of copyright opt-outs is proving problematic. EU regulators face a dilemma: strict copyright enforcement could hamper EU competitiveness in AI development, while immediate legal reform is impractical. The subtle weakening of copyright enforcement in the CoP has attracted most major AI developers as signatories, with the exception of Meta and xAI. A more satisfactory policy would require a debate on the role of AI in enhancing learning, research, and innovation, as the current copyright framework, dominated by media industries representing less than 4% of GDP, may impede progress.
European Industry Isn’t Showing Up for Standards Development
A key figure in EU AI standards development has criticized European industry’s lack of participation in creating the technical standards crucial for implementing the AI Act. Piercosma Bisconti, who leads the drafting of an “AI trustworthiness framework” for CEN and CENELEC, publicly called out companies for their absence from the standards-setting process. His remarks were particularly directed at signatories of the “AI champions initiative”, which includes major firms such as Airbus, Siemens, Spotify, and SAP. The AI Act relies on detailed technical standards to convert its broad principles into concrete guidelines for AI developers. However, slow progress in standards development has prompted industry and EU governments to request implementation delays. Bisconti specifically faulted companies that have called for “stopping the clock” on the AI Act while simultaneously failing to engage in the standards development process, noting that “EU industry is barely at the table.”