A False Confidence in the EU AI Act: Epistemic Gaps and Bureaucratic Traps

On July 10, 2025, the European Commission released the final version of the General-Purpose Artificial Intelligence (GPAI) Code of Practice, a voluntary code designed to help industry comply with the AI Act’s rules for GPAI models. The Code had been under development since October 2024, following a kick-off plenary in September 2024. The Commission had originally planned to publish the final text by May 2, 2025; the delay sparked widespread speculation, ranging from concerns about industry lobbying to deeper ideological tensions between proponents of innovation and proponents of regulation.

Beyond these narratives, a more fundamental issue emerges: an epistemic and conceptual disconnect at the core of the EU Artificial Intelligence Act, particularly in its approach to general-purpose AI (GPAI). The final Code, with its three chapters on Transparency, Copyright, and Safety and Security, does not resolve this disconnect.

The Legal Invention of General-Purpose AI

According to Art. 3(63) of the EU AI Act, a “general-purpose AI model” is an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market. The term maps onto what the AI research community calls a foundation model: a large-scale model trained on broad data that can be adapted to many downstream tasks. Examples include OpenAI’s GPT series, which powers ChatGPT, and Google’s Gemini.

However, the term “general-purpose AI” did not originate within the AI research community but is a legal construct introduced by the EU AI Act. This attempt to impose legal clarity onto an evolving domain creates a false sense of certainty and stability, suggesting that AI systems can be easily classified and understood.

The Limits of a Risk-Based Framework

The EU AI Act employs a risk-based regulatory approach, defining risk in Art. 3(2) as the combination of the probability of an occurrence of harm and the severity of that harm. This traditional view assumes harms are foreseeable, yet AI, and foundation models in particular, complicates that assumption: the capabilities of such models are difficult to enumerate or quantify in advance, which undermines conventional risk assessment.
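
To see why, consider the standard formalization that sits behind such definitions (a convention from risk analysis, not spelled out in the Act itself): risk as expected harm over an enumerable set of scenarios,

R = \sum_i p_i \cdot s_i

where p_i is the probability of harm scenario i and s_i its severity. The calculation presupposes that the scenarios can be listed and their probabilities estimated in advance; it is precisely this presupposition that the hard-to-anticipate capabilities of foundation models call into question.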

This creates a legal and epistemic tension: law requires certainty, yet AI challenges that prerequisite. The Act’s treatment of systemic risk reflects influences from AI Safety discourse, yet it lacks engagement with that field’s fundamental concepts, such as alignment and control. By framing systemic risk as a technical property of the model itself, the Act overlooks the critical role of deployment contexts and institutional oversight in shaping real-world harms.

The Bureaucratic Trap of Legal Certainty

Max Weber’s analysis of bureaucracy illustrates the mismatch between legal assumptions and technological realities. Bureaucracy relies on clear categorization, which the EU AI Act exemplifies through precise definitions. However, this legal formalism may hinder adaptive governance, locking Europe into outdated frameworks as AI research advances.

Thomas Kuhn’s theory of scientific revolutions further explains this phenomenon. Kuhn described “normal science” as operating within established paradigms, with shifts occurring when anomalies accumulate. The current state of AI research is disrupting existing paradigms, yet legal systems tend to lag behind.

The risk of legislating yesterday’s paradigms into tomorrow’s world is significant. Instead of anchoring regulation in fixed categories, policymakers should adopt governance mechanisms that anticipate conceptual change and allow for iterative revisions. This requires a shift from static definitions to a framework accommodating AI’s evolving nature.

Anticipatory Governance of Emerging Technologies

The OECD’s work on anticipatory innovation governance illustrates how frameworks can prepare for multiple possible futures. Such governance can be embedded into core policymaking processes, contrasting sharply with the EU AI Act’s reliance on fixed categories. This approach emphasizes flexibility and iterative review, essential for effective governance in the rapidly changing AI landscape.

The delay in releasing the GPAI Code of Practice should not be viewed as a moment of conflict but as an opportunity to consider a more suitable governance framework—one that embraces uncertainty and adapts to change rather than imposing rigid definitions.
