A False Confidence in the EU AI Act: Epistemic Gaps and Bureaucratic Traps
On July 10, 2025, the European Commission released the final version of the General-Purpose Artificial Intelligence (GPAI) Code of Practice, a voluntary tool designed to help industry comply with the AI Act’s rules for general-purpose AI models. The Code had been in development since October 2024, following a kick-off plenary in September 2024, and the Commission had originally planned to finalize it by May 2, 2025. The delay sparked widespread speculation, ranging from concerns about industry lobbying to deeper ideological tensions between proponents of innovation and advocates of stricter regulation.
Beyond these narratives, a more fundamental issue emerges: an epistemic and conceptual disconnect at the core of the EU Artificial Intelligence Act, particularly in its approach to general-purpose AI. The final Code, organized into three chapters on Transparency, Copyright, and Safety and Security, does not address these core problems.
The Legal Invention of General-Purpose AI
According to Art. 3(63) of the EU AI Act, a “general-purpose AI model” is an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market. The term maps onto what the AI research community calls a foundation model: a large-scale model trained on broad data that can be adapted to many downstream tasks. Examples include the GPT models underlying OpenAI’s ChatGPT and Google’s Gemini.
However, the term “general-purpose AI” did not originate within the AI research community but is a legal construct introduced by the EU AI Act. This attempt to impose legal clarity onto an evolving domain creates a false sense of certainty and stability, suggesting that AI systems can be easily classified and understood.
The Limits of a Risk-Based Framework
The EU AI Act employs a risk-based regulatory approach, defining risk as the combination of the probability of an occurrence of harm and the severity of that harm (Art. 3(2)). This traditional view assumes that harms are foreseeable and measurable, yet foundation models resist that assumption: their capabilities are broad, open-ended, and dependent on downstream use, which makes both the probability and the severity of potential harms difficult to estimate in advance.
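To make the tension concrete, the Act’s risk logic can be sketched as an expected-harm calculation over a set of foreseeable harms. The formalization below is a rough illustration, not a formula that appears in the Act; the symbols R, H, P(h), and S(h) are shorthand introduced here for the argument. Its point is that, for a foundation model, none of the terms is well defined in advance.

```latex
% Illustrative only: a schematic rendering of the Act's risk-based logic,
% not a formula that appears anywhere in the AI Act itself.
\documentclass{article}
\begin{document}

% R   : regulatory risk attributed to a system
% H   : the set of reasonably foreseeable harms
% P(h): probability that harm h occurs
% S(h): severity of harm h
\[
  R \;=\; \sum_{h \in H} P(h)\, S(h)
\]

% For a foundation model, the set H is open-ended and both P(h) and S(h)
% depend heavily on downstream deployment contexts, so none of these terms
% is well defined ex ante.

\end{document}
```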
This creates a legal and epistemic tension: law requires certainty, yet AI undermines that very prerequisite. The Act’s treatment of systemic risk reflects influences from AI Safety discourse, yet it barely engages with that field’s foundational concepts, such as alignment and control. By framing systemic risk as a technical property of the model, most visibly through the presumption that models trained with more than 10^25 floating-point operations have high-impact capabilities (Art. 51(2)), the Act overlooks the critical role of deployment contexts and institutional oversight in shaping real-world harms.
The Bureaucratic Trap of Legal Certainty
Max Weber’s analysis of bureaucracy helps explain this mismatch between legal assumptions and technological realities. Bureaucratic rationality relies on clear categorization, which the EU AI Act exemplifies through its precise definitions. Yet this legal formalism may hinder adaptive governance, locking Europe into outdated frameworks as AI research advances.
Thomas Kuhn’s theory of scientific revolutions offers a further lens. Kuhn described “normal science” as operating within established paradigms, with shifts occurring only once anomalies accumulate. AI research is currently producing exactly such anomalies, unsettling the categories on which earlier governance frameworks were built, while legal systems tend to lag behind.
The risk of legislating yesterday’s paradigms into tomorrow’s world is significant. Instead of anchoring regulation in fixed categories, policymakers should adopt governance mechanisms that anticipate conceptual change and allow for iterative revision, shifting from static definitions to a framework that can accommodate AI’s evolving nature.
Anticipatory Governance of Emerging Technologies
The OECD’s work on anticipatory innovation governance illustrates how frameworks can prepare for multiple possible futures. Such governance can be embedded into core policymaking processes, contrasting sharply with the EU AI Act’s reliance on fixed categories. This approach emphasizes flexibility and iterative review, essential for effective governance in the rapidly changing AI landscape.
The delayed release of the GPAI Code of Practice should not be read merely as a moment of conflict, but as an opportunity to consider a more suitable governance framework: one that embraces uncertainty and adapts to change rather than imposing rigid definitions.