The EU AI Act: Redefining Public Law in the Digital Age

The Hidden Reach of the EU AI Act: Expanding the Scope of EU Public Power

In June 2024, the European Union adopted the AI Act, the first comprehensive attempt worldwide to regulate artificial intelligence (AI). As AI systems spread across ever more domains, the implications of the Act grow correspondingly significant.

At the national level, experimentation with AI systems in the public sector is concentrated in public administration. Notable examples include:

  • The allocation of public benefits
  • The prevention of crime
  • The evaluation of visa or asylum requests

In these scenarios, authorities may use AI-generated risk profiles to support their decision-making. By requiring Member States to align their laws and practices with the AI Act, the regulation will have a significant impact on national administrative procedures.

Broader Implications of the AI Act

This analysis addresses a largely overlooked effect of the AI Act: its potential to shape the operations of public authorities beyond the direct scope of its provisions. As EU legislation, the AI Act triggers the application of second-order EU law, which encompasses the general principles of Union law and the EU Charter of Fundamental Rights (CFR).

This trigger function has profound implications, as it may advance the Europeanisation of national administrative law, effectively shifting the boundary between EU public law and domestic public law in the realm of digital regulation. This analysis bridges two pertinent debates: the transformative influence of the EU Charter of Fundamental Rights and the impact of the AI Act on national administrative law.

The “Trigger Function” of the EU AI Act

In the absence of a general fundamental rights competence at the EU level, the Charter can constrain Member State action only within the scope of existing EU competences. According to Article 51(1) CFR, the Charter's provisions are addressed to Member States only when they are implementing Union law. The scope of this provision was clarified in Åkerberg Fransson, where the Court of Justice of the European Union held that the Charter applies to all Member State activities falling within the scope of EU law.

However, the mere adoption of the AI Act does not bring all national activities within the scope of EU law. A sufficiently direct connection between a national activity and Union law is required for the Charter to apply. The Court asks whether a specific rule of EU law, independent of and distinct from the fundamental right itself, is applicable to the situation. A Member State activity therefore falls within the scope of EU law only if the AI Act regulates it in a meaningful way.

The AI Act as a “Gateway” for EU Principles of Procedural Justice

The AI Act, as product safety legislation, primarily establishes procedural obligations designed to secure the accuracy, fairness, and legality of AI outputs, rather than mandating substantive requirements. When AI systems are employed in administrative contexts, procedural aspects of decision-making are therefore more likely than substantive aspects to fall within the scope of EU law.

For instance, a public authority using AI-generated risk profiles to assess the likelihood of student loan fraud must comply with specific provisions of the AI Act: it must conduct a prior fundamental rights impact assessment and ensure human oversight when deploying the system. These activities clearly fall within the scope of EU law.

Conversely, decisions based on the AI-created risk profiles, such as the decision to conduct a home inspection, are less clear-cut. While the AI Act influences these decisions, few of its provisions govern them directly. Article 86 of the AI Act does, however, entitle affected individuals to a meaningful explanation of the role the AI system played in the decision-making process. This procedural requirement aligns with EU principles of good administration, including the duty to give reasons and the right to be heard.

Significance and Implications

The practical significance of these provisions is difficult to predict, since it largely depends on the obligations public authorities already bear under national law. Where EU law imposes a duty to give reasons that exceeds national requirements, authorities must meet the EU standard whenever AI systems are involved. This may also introduce remedies that national law does not provide.

Where national law already meets EU requirements, however, the substantive impact may be less noticeable. Regardless of the practical consequences, the fact that EU principles of procedural justice apply, rather than national standards alone, alters the relationship between EU and national public law.

As highlighted by Advocate General Saugmandsgaard Øe, once applicable, the Charter restricts the regulatory and policy options available to Member States, thereby enhancing the EU’s capacity to define the boundaries of permissible actions.

In conclusion, the AI Act not only establishes extensive obligations for public authorities but also triggers the application of EU principles of procedural justice, contributing to the Europeanisation of national administrative law and redefining the relationship between EU public law and domestic public law in the digital regulation landscape.
