AI in Arbitration: Will the EU AI Act Stand in the Way of Enforcement?

The European Union has taken an unprecedented step by regulating artificial intelligence (AI) through the EU AI Act, the world's first comprehensive legal framework for AI governance. Under Recital 61, Article 6(2), and point 8(a) of Annex III, AI tools used in legal or administrative decision-making, including alternative dispute resolution (ADR) where it functions similarly to a court and produces legal effects, are classified as high-risk. These tools must comply with the strict requirements set out in Articles 8 through 27.

These provisions aim to ensure transparency, accountability, and respect for fundamental rights, and they take effect on August 2, 2026, under Article 113. Notably, the Act's extraterritorial scope, set out in Article 2(1)(c) and (g), reaches any AI system that affects individuals within the European Union, regardless of where the system is developed or deployed; it covers providers and deployers outside the EU whose outputs are used within the Union. This raises a critical question: can non-compliance with the EU AI Act serve as a basis for courts in EU Member States to refuse recognition or enforcement of an arbitral award on procedural or public policy grounds?

Consider the following scenario: Two EU-based technology companies, one Belgian and one German, agree to resolve their disputes through US-seated arbitration. If the ADR center utilizes AI-powered tools that do not comply with the EU AI Act’s high-risk system requirements, how would enforcement of the resulting award unfold before national courts in the EU?

This scenario presents a direct legal conflict. If the winning party seeks to enforce the award in a national court of an EU Member State, two well-established legal grounds for refusing enforcement may arise. First, the losing party may invoke Article V(1)(d) of the 1958 New York Convention, alongside the applicable national arbitration law, arguing that reliance on AI systems that do not comply with the EU AI Act constitutes a procedural irregularity, as it deviates from the agreed arbitration procedure and undermines the integrity of the arbitral process. Second, under Article V(2)(b) of the Convention, the enforcing court may refuse recognition on its own initiative if it determines that using non-compliant AI violates the forum’s public policy, particularly when fundamental rights or procedural fairness are at stake.

Scenario 1: Procedural Irregularity under Article V(1)

Imagine that the ADR center employs an AI tool to assist the tribunal in drafting the award during the proceedings. This AI system utilizes complex algorithms that cannot produce transparent, human-readable explanations of how key conclusions were reached. The final award relies on these outputs, yet lacks meaningful reasoning or justification for several significant findings. Furthermore, the tribunal does not disclose the extent of its reliance on the AI system, nor is there clear evidence of human oversight in the deliberation process.

When the losing party contests enforcement of the award in Belgium, it invokes Article V(1)(d) of the New York Convention, arguing that the arbitral procedure did not conform to the parties' agreement or the applicable law. The same ground appears in Article 1721 of the Belgian Judicial Code (BJC), which is inspired by Article 36 of the UNCITRAL Model Law and largely mirrors the grounds of Article V of the New York Convention. Two points are especially relevant to the use of AI in the arbitral process and central to the objection in this case.

First, under Article 1721(1)(d), a party may argue that the award lacks proper reasoning, violating a core procedural guarantee under Belgian law. This requirement ensures that parties can understand the legal and factual basis for the tribunal's decision and respond accordingly. Here, however, the award relies on opaque, AI-generated conclusions, particularly from "black box" systems, which renders the reasoning inaccessible and legally inadequate. The EU AI Act reinforces this objection: Articles 13, 16, and 17 require transparency, traceability, and documentation for high-risk AI systems, and Article 86 grants affected persons a limited right to explanation where a deployer's decision is based on the output of an Annex III system and produces legal effects. An award that fails to meet these standards may not satisfy Belgian procedural norms.

Second, under Article 1721(1)(e), a party may argue that the tribunal's composition or procedure deviated from the parties' agreement or the law of the seat. For instance, if the arbitration agreement anticipated adjudication by human arbitrators and the tribunal instead relied on AI tools that materially influenced its reasoning, without disclosure or consent, this could constitute a procedural irregularity. Article 14 of the EU AI Act requires effective human oversight of high-risk AI systems; where such oversight is absent or merely formal and AI outputs are adopted without critical human assessment, the legitimacy of the proceedings may be significantly compromised. Belgian courts have consistently held that procedural deviations capable of affecting the outcome may justify refusal of recognition and enforcement.

Scenario 2: Public Policy under Article V(2)(b)

In this scenario, the court may refuse to enforce the award on its own initiative if enforcement would be contrary to public policy under Article V(2)(b) of the New York Convention, Article 36(1)(b)(ii) of the UNCITRAL Model Law, or Article 1721(3) of the Belgian Judicial Code. These provisions allow courts to deny recognition and enforcement where the underlying procedure or outcome conflicts with fundamental principles of justice in the national and European legal orders.

In comparative international practice, public policy has both substantive and procedural dimensions. Procedural public policy is engaged by a breach of fundamental, widely recognized procedural principles that renders an arbitral decision incompatible with the core values and legal order of a state governed by the rule of law. Examples include violations of due process, lack of tribunal independence, breach of equality of arms, and other essential guarantees of fair adjudication.

In this case, the use of non-transparent AI systems may fall within this category. If a tribunal relies on such tools without disclosing their use or providing an understandable justification for its conclusions, the process could violate Article 47 of the Charter of Fundamental Rights of the European Union, which guarantees the right to a fair and public hearing before an independent and impartial tribunal. Such a violation, read together with the comparative case law discussed below, could provide a reasonable basis for refusal on public policy grounds, particularly since Belgian courts, when applying norms within the scope of EU law, must interpret procedural guarantees in accordance with the Charter.

Comparative case law provides additional support. In Dutco, the French Cour de cassation held that equality of the parties in the constitution of the arbitral tribunal is a matter of public policy, a classic concern of procedural public policy. Similarly, in a 2016 decision under § 611(2)(5) of the Austrian ZPO, the Austrian Supreme Court set aside an award where the arbitral procedure was incompatible with the fundamental values of the Austrian legal system. These rulings confirm that courts may deny enforcement when arbitral mechanisms, especially those affecting the outcome, compromise procedural integrity.

Belgian courts have consistently held that recognition and enforcement must be refused where the underlying proceedings are incompatible with ordre public international belge, particularly where fundamental principles such as transparency, reasoned decision-making, and party equality are undermined. In this context, reliance on non-transparent AI without adequate procedural safeguards may constitute a violation of procedural public policy, and enforcement may lawfully be denied ex officio under Article V(2)(b) of the New York Convention and Article 1721(3) of the Belgian Judicial Code, preserving the integrity of both the Belgian and broader EU legal frameworks. Ultimately, courts retain wide discretion under the public policy ground to decide whether to enforce AI-assisted awards.

The EU AI Act as a Global Regulatory Model?

The EU has a proven record of setting global legal benchmarks: rules that originate in Europe but shape laws and practices far beyond its borders. The GDPR is the clearest example. Its extraterritorial scope, strict compliance obligations, and enforcement mechanisms have inspired countries from Brazil to Japan to adopt similar data protection frameworks.

In arbitration, a comparable pattern could emerge. If EU courts apply the EU AI Act’s high-risk requirements when deciding on the recognition and enforcement of arbitral awards, other jurisdictions may adopt comparable standards, prompting convergence in AI governance across dispute resolution systems. Conversely, inconsistent enforcement approaches could encourage fragmentation rather than harmonization. Regardless, the Act’s influence is already being felt beyond Europe, compelling arbitration stakeholders to address emerging questions regarding procedural legitimacy, technological oversight, and cross-border enforceability.

Conclusion

The interplay between the EU AI Act and the enforcement of arbitral awards underscores how technological regulation is shaping the concept of procedural fairness in cross-border dispute resolution. Whether the Act becomes a catalyst for global standards or a source of jurisdictional friction, parties and institutions cannot overlook its requirements. As AI tools become more integrated into arbitral practice, compliance will evolve from a regulatory obligation into a strategic necessity for ensuring the enforceability of awards in key jurisdictions.
