False Confidence in the EU AI Act: Understanding the Epistemic Gaps

On July 10, 2025, the European Commission released the final draft of the General-Purpose Artificial Intelligence (GPAI) Code of Practice, a voluntary code designed to help industry comply with the AI Act’s rules for GPAI models. The Code had been under development since October 2024, following a kick-off plenary in September 2024. The Commission had originally planned to publish the final draft by May 2, 2025, and the delay sparked widespread speculation, ranging from concerns about industry lobbying to deeper ideological tensions between proponents of innovation and proponents of regulation.

Beyond these narratives, a more fundamental issue emerges: an epistemic and conceptual disconnect at the core of the EU Artificial Intelligence Act, particularly in its approach to general-purpose AI (GPAI). The final draft, which comprises three chapters covering Transparency, Copyright, and Safety and Security, does not address these core problems.

The Legal Invention of General-Purpose AI

Under Art. 3(63) of the EU AI Act, a “general-purpose AI model” is an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market. The term corresponds to what the AI research community calls a foundation model: a large-scale model trained on broad data that can be adapted to many downstream tasks. Examples include OpenAI’s GPT models, which power ChatGPT, and Google’s Gemini.

However, the term “general-purpose AI” did not originate within the AI research community; it is a legal construct introduced by the EU AI Act. Imposing legal clarity onto an evolving research domain creates a false sense of certainty and stability, suggesting that AI systems can be neatly classified and understood.

The Limits of a Risk-Based Framework

The EU AI Act employs a risk-based regulatory approach, defining risk in Art. 3(2) as the combination of the probability of harm occurring and the severity of that harm. This traditional view assumes that harms are foreseeable. Foundation models strain that assumption: their capabilities are open-ended and difficult to quantify, which undermines conventional risk assessment.
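
To make the tension concrete, consider the standard formalization from safety engineering. The Act defines risk in these terms but prescribes no formula, so the expression below is an illustrative sketch rather than the Act’s methodology:

R(h) = P(h) · S(h)

where P(h) is the probability that a given harm h occurs and S(h) is its severity. Computing R presupposes that the relevant harms can be enumerated and their probabilities estimated in advance; for a foundation model with open-ended capabilities, neither step is available ex ante.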

This creates a legal and epistemic tension: law requires certainty, yet AI undermines that very prerequisite. The Act’s treatment of systemic risk reflects the influence of AI Safety discourse, yet it does not engage with that field’s fundamental concepts, such as alignment and control. By framing systemic risk as a technical property of models, the Act overlooks the critical role of deployment contexts and institutional oversight in shaping real-world harms.

The Bureaucratic Trap of Legal Certainty

Max Weber’s analysis of bureaucracy illustrates the mismatch between legal assumptions and technological realities. Bureaucracy relies on clear categorization, which the EU AI Act exemplifies through precise definitions. However, this legal formalism may hinder adaptive governance, locking Europe into outdated frameworks as AI research advances.

Thomas Kuhn’s theory of scientific revolutions further explains this phenomenon. Kuhn described “normal science” as operating within an established paradigm, with paradigm shifts occurring once anomalies that the paradigm cannot explain accumulate. AI research is currently in such a period of upheaval, yet legal systems tend to lag behind, codifying the paradigm that is being displaced.

The risk of legislating yesterday’s paradigms into tomorrow’s world is significant. Instead of anchoring regulation in fixed categories, policymakers should adopt governance mechanisms that anticipate conceptual change and allow for iterative revisions. This requires a shift from static definitions to a framework accommodating AI’s evolving nature.

Anticipatory Governance of Emerging Technologies

The OECD’s work on anticipatory innovation governance shows how regulatory frameworks can prepare for multiple possible futures. Such governance can be embedded into core policymaking processes, in sharp contrast to the EU AI Act’s reliance on fixed categories. The approach emphasizes flexibility and iterative review, both essential for effective governance in a rapidly changing AI landscape.

The delay in releasing the GPAI Code of Practice should not be viewed as a moment of conflict but as an opportunity to consider a more suitable governance framework—one that embraces uncertainty and adapts to change rather than imposing rigid definitions.
