Lex Algorithmi, Lex Imperfecta? The Limits of Legal Translation in the EU AI Act
The intersection of technology and law has always been complex, but the advent of artificial intelligence (AI), and of algorithmic decision-making systems in particular, has intensified the challenge. These systems are not merely tools: they are sociotechnical entities that process data while embedding social norms, policy assumptions, and, at times, unintended bias. A law that seeks to regulate them must therefore engage not only with technical artefacts but with the values, institutional structures, and power dynamics built into them.
In legal theory, the term translatio iuris refers to the challenge of transforming broad principles into enforceable rules. The EU AI Act serves as a prime example of how difficult this translation can be. While it articulates the desire for “trustworthy AI” and “non-discriminatory systems”, the real challenge lies in defining what these concepts entail in practical terms, such as system audits, algorithmic transparency, and cross-border compliance.
The Translational Gap
As one examines the EU AI Act and similar frameworks, terms like transparency, fairness, accountability, and non-manipulation surface frequently. However, the ambiguity arises when these high-level ideals need to be operationalized. For instance, what constitutes “meaningful information”? Is it meaningful to a data scientist, a consumer, or a regulator?
The concept of translatio iuris becomes crucial here, as high-level ethical ideals must be transformed into operational, technical, and legally enforceable mechanisms. AI systems often structure decisions rather than merely executing them, which complicates the legal landscape even further.
From Ex Ante to Ex Post
Legal regulation often employs a mix of ex ante (preventive) and ex post (reactive) measures. The EU AI Act adheres to this traditional regulatory approach, emphasizing the need for both preventive safeguards and corrective mechanisms.
A. Ex Ante Requirements
Ex ante requirements aim to anticipate harm before it manifests. This includes:
- Risk classification under Title III of the Act;
- Data governance (Article 10);
- Transparency obligations (Article 13);
- Human oversight mechanisms (Article 14);
- Conformity assessments (Article 19) and CE marking (Article 49).
These obligations act as filters, ensuring that only systems meeting predefined thresholds enter the market, reflecting the principle of lex specialis—specific rules that take precedence over general ones in high-risk contexts.
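The gatekeeping logic of these ex ante obligations can be sketched in a few lines of code. This is a purely illustrative sketch of a hypothetical provider-side compliance checklist; the field names and the pass/fail rule are invented for exposition and are not drawn from the Act itself:

```python
# Hypothetical sketch: an ex ante "filter" that gates market entry on
# predefined compliance checks. Field names and logic are illustrative only.

from dataclasses import dataclass

@dataclass
class SystemDossier:
    """Technical documentation a provider might assemble pre-market."""
    risk_class: str                   # e.g. "high" under Title III criteria
    data_governance_documented: bool  # cf. Article 10
    transparency_notice: bool         # cf. Article 13
    human_oversight_design: bool      # cf. Article 14
    conformity_assessed: bool         # cf. Article 19

def may_enter_market(d: SystemDossier) -> bool:
    """High-risk systems pass only if every predefined threshold is met."""
    if d.risk_class != "high":
        return True  # non-high-risk systems face lighter obligations
    return all([
        d.data_governance_documented,
        d.transparency_notice,
        d.human_oversight_design,
        d.conformity_assessed,
    ])
```

The sketch makes the "filter" metaphor literal: market entry is conditioned on a conjunction of predefined checks, which is precisely why the hard legal work lies in specifying each check, not in the gating logic itself.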
B. Ex Post Mechanisms
Once an AI system is operational, ex post mechanisms come into play, including audits and performance monitoring. These mechanisms are designed to:
- Detect harms or legal violations missed during development;
- Allow for redress and correction (e.g., enforcement powers under Article 71);
- Update risk classifications based on actual use.
However, as scholars of algorithmic accountability have noted, ex post regulation struggles when decision-making is non-linear or probabilistic. When harms emerge from the interaction of models, data pipelines, and deployment choices, allocating responsibility across those layers becomes a complex issue.
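To make the ex post idea concrete, here is a minimal, hypothetical monitoring check of the kind an auditor might run over a deployed system's decisions. The four-fifths (0.8) threshold is a conventional rule of thumb from discrimination auditing, used here purely for illustration; it is not a standard set by the Act:

```python
# Hypothetical sketch of an ex post audit: monitor deployed decisions for a
# disparity that pre-market testing may have missed. Threshold is illustrative.

def approval_rate(decisions):
    """Share of positive outcomes (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def disparate_impact_flag(group_a, group_b, threshold=0.8):
    """Flag when one group's approval rate falls below `threshold` times
    the other's -- a signal that redress or re-classification of the
    system (cf. Article 71 enforcement powers) may be warranted."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return hi > 0 and (lo / hi) < threshold
```

Note what the sketch cannot do: it detects a statistical disparity, but says nothing about which component of a layered, probabilistic pipeline caused it, which is exactly the accountability gap the text describes.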
Between Lex Ferenda and Lex Lata: Implementing the AI Act
The EU AI Act attempts to regulate sociotechnical systems using legal frameworks not originally designed for these technologies. The challenge lies in implementing mechanisms such as ex ante assessments, documentation, and post-deployment audits.
A. Translating High-Level Principles into Practice
Each regulatory tool aims to convert ideals such as non-discrimination and explainability into concrete, verifiable obligations. Ambiguity arises, however, in determining what constitutes "sufficient accuracy" or a "meaningful" explanation.
B. Jurisdiction, Sovereignty, and Legal Hierarchies
The EU AI Act does not replace existing frameworks but coexists with them, such as the GDPR and various consumer protection laws. This interplay raises questions regarding which framework takes precedence in cases of conflicting obligations.
C. The Role of Audits and Conformity Assessments
The Act’s attempt to embed preventive and corrective tools through structured evaluations is noteworthy. It includes:
- Pre-market conformity assessments for high-risk systems;
- Post-market monitoring obligations;
- The option for third-party evaluations.
However, the effectiveness of these mechanisms is contingent upon their implementation and interpretability.
Enforcement: Between Ius Scriptum and Ius Non Scriptum
The aspiration for enforceability within the EU AI Act is complicated by practical challenges. The distinction between ius scriptum (written law) and ius non scriptum (unwritten law shaped by norms) becomes evident in enforcement dynamics.
A. Capacity and Coordination Problems
The Act establishes various layers of institutional responsibility, including a Union-level coordination body (the European Artificial Intelligence Board) and national market surveillance authorities. However, effective implementation hinges on the cooperation of these entities, which may be strained by the regulatory workloads they already carry.
B. The “Black Box” Problem Revisited
The challenge of enforcing transparency and explainability is particularly pronounced due to the inherent complexity of AI systems. The law mandates that explanations be “meaningful,” but the interpretation of this term can vary significantly among stakeholders.
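The stakeholder-relativity of "meaningful" can be illustrated with a toy example: one and the same decision, rendered two ways. The model, feature names, and weights below are invented for exposition; nothing here reflects any real system, nor a legally sufficient explanation:

```python
# Hypothetical sketch: the same decision rendered as two "explanations".
# Weights and features are invented; the point is audience-dependence.

WEIGHTS = {"income": 0.5, "debt_ratio": -0.7, "tenure_years": 0.2}

def contributions(applicant):
    """Per-feature contributions of a toy linear score."""
    return {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}

def explain_for_engineer(applicant):
    """Raw attributions: precise, but opaque to a layperson."""
    return contributions(applicant)

def explain_for_consumer(applicant):
    """A lay rendering: name only the single most negative factor."""
    c = contributions(applicant)
    worst = min(c, key=c.get)
    return f"The factor that most lowered your score was: {worst}."
```

The engineer's output is complete but uninterpretable to a consumer; the consumer's output is legible but discards almost all of the model's logic. Which rendering satisfies a mandate that explanations be "meaningful" is precisely the interpretive question the law leaves open.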
C. Sanctions and Deterrence
While the Act outlines penalties for non-compliance, including fines calibrated to a firm's global turnover, their deterrent effect remains to be seen. Larger firms might absorb fines as an operational cost, while smaller entities may struggle to comprehend the full scope of their obligations.
D. The Risk of Formalism Without Function
There exists a risk of compliance formalism, where entities adhere to the letter of the law but fail to capture its spirit. The AI Act’s provisions for oversight and monitoring aim to mitigate this risk, but the success of these measures will depend on their actual implementation.
Governing in the Age of Algorithmic Drift
As lawmakers strive to regulate rapidly evolving technologies, the challenge is to maintain the intelligibility and authority of legal systems in a landscape characterized by fluidity and abstraction. This necessitates a shift from traditional categories of law to a more iterative legality, where legal frameworks must adapt over time.
A. From Rules to Reflexivity
The ultimate lesson may be the need for reflexive governance, where laws are not just enforceable but also contestable. This includes mechanisms that facilitate public oversight and community engagement in shaping norms.
As the EU AI Act unfolds, the interplay between regulation and technological advancement will shape not only the future of AI but also the very fabric of legal imagination.