AI’s Role in Shaping Anticompetitive Practices in EU Law

Artificial Intelligence and Anticompetitive Agreements in EU Law

The rise of artificial intelligence (AI) and its widespread availability raise questions regarding its potential use in violating EU competition law. This issue is complex due to two characteristics of AI systems highlighted under the EU AI Act: (1) they operate with varying levels of autonomy and (2) they infer from the input they receive how to generate outputs such as predictions, recommendations, or decisions that can influence physical or virtual environments.

This study explores the role of AI in anticompetitive horizontal and vertical agreements under Article 101 of the Treaty on the Functioning of the EU, as well as how AI could assist regulators in enforcing competition law rules.

Horizontal Agreements

AI may be used in cartels or hub-and-spoke arrangements. While purely autonomous price coordination, absent any agreement or concerted practice, does not currently violate EU competition law, it raises important questions for the future. Horizontal agreements in the AI market would largely resemble those in other markets. However, given the strategic importance of AI, growing demand for it, and the scarcity of AI talent and skills, no-poach agreements that reduce competition in the labor market may be particularly noteworthy.

  • Cartels: Explicit collusion is the clearest violation of EU competition rules: competitors communicate directly to agree on anticompetitive practices such as price-fixing or market sharing. However, once an agreement is established, participants might deviate from the plan to obtain more favorable outcomes for themselves. AI can be used to address this issue and facilitate the formation of stable cartels. For example, cartel participants may use AI to implement agreements automatically, reducing the need for direct communication, and to monitor individual behavior to ensure cartel stability. These cases do not present new legal challenges, as competition law rules apply as usual; the main difficulty lies in detecting the cartel and understanding how AI has been used for such anticompetitive purposes. For instance, in 2016, the UK Competition and Markets Authority found that online sellers of posters and frames had used automated repricing software to monitor and adjust their prices, ensuring that neither was undercutting the other.
  • Hub and Spoke: Anticompetitive information exchange can also occur indirectly, typically where competitors are aware that prices are set by a third-party AI-based platform and do not distance themselves from that practice. For example, in Eturas, travel agencies were suspected of applying a common cap on discounts through a third-party online booking platform. The Court of Justice of the EU confirmed that platform terms setting a discount cap can give rise to anticompetitive collusion among the travel agencies using the platform: agencies could be presumed to have participated in the concerted practice if they were aware of the anticompetitive change to the terms, unless they publicly distanced themselves from it.
  • Autonomous Price Coordination: Competitors may independently deploy distinct pricing AI tools, built on their own algorithms and datasets, through which the tools learn and adapt their price-setting strategies. Various experiments suggest that when such AI systems interact in a market environment, they tend to reach a price equilibrium above competitive levels (a simplified sketch of such an experiment follows this list). However, these experiments remain largely theoretical, and real-world evidence of algorithmic tacit collusion is limited; competition authorities and academics continue to investigate the issue. Although tacit collusion does not currently fall within the scope of EU competition law, this may need to be reconsidered as AI becomes increasingly sophisticated.
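
The following Python sketch illustrates, in highly simplified form, the kind of experiment referred to above: two independent Q-learning agents repeatedly set prices in a stylized duopoly and may learn to sustain prices above the competitive level without ever communicating. The demand model, parameters, and price grid are illustrative assumptions and do not reproduce any particular study; experiments reported in the literature frequently find learned prices between the competitive and the monopoly benchmark, and this toy setup merely shows the mechanics.

import numpy as np

# Illustrative parameters (assumptions, not taken from any study).
PRICES = np.linspace(1.0, 2.0, 11)   # discrete price grid; marginal cost is 1.0
COST = 1.0
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05  # learning rate, discount factor, exploration rate
ROUNDS = 200_000
rng = np.random.default_rng(0)

def profit(p_own: float, p_rival: float) -> float:
    # Simple linear demand: the cheaper firm attracts more buyers.
    demand = max(0.0, 2.0 - p_own + 0.5 * (p_rival - p_own))
    return (p_own - COST) * demand

n = len(PRICES)
# Each agent's state is the rival's last price index; actions are price indices.
q_tables = [np.zeros((n, n)), np.zeros((n, n))]
state = [0, 0]

for _ in range(ROUNDS):
    # Epsilon-greedy action selection for both agents.
    actions = [
        int(rng.integers(n)) if rng.random() < EPS else int(np.argmax(q_tables[i][state[i]]))
        for i in (0, 1)
    ]
    for i in (0, 1):
        rival = 1 - i
        reward = profit(PRICES[actions[i]], PRICES[actions[rival]])
        next_state = actions[rival]
        best_next = np.max(q_tables[i][next_state])
        current = q_tables[i][state[i], actions[i]]
        q_tables[i][state[i], actions[i]] = current + ALPHA * (reward + GAMMA * best_next - current)
        state[i] = next_state

# Prices the agents would charge if they stopped exploring now.
learned = [float(PRICES[int(np.argmax(q_tables[i][state[i]]))]) for i in (0, 1)]
print("learned prices:", learned)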

Vertical Agreements

Most vertical agreements, which are concluded between parties operating at different levels of the supply chain, do not breach EU competition law. The European Commission’s Guidelines on Vertical Restraints explain that this is because the activities of the parties to such agreements are complementary, so that pro-competitive actions by one party will often benefit the other and, ultimately, consumers. However, certain vertical agreements may still raise competition concerns under EU law.

  • Input Foreclosure: Anticompetitive vertical arrangements can result in the foreclosure of critical inputs to downstream players. For example, if two firms in different segments of the AI supply chain agree to grant each other exclusive access to a valuable resource, this could hinder other competitors from developing competitive products. One illustration would be an AI chip manufacturer and an AI developer agreeing to give each other exclusive access to their respective semiconductor technology and advanced training datasets, foreclosing rival AI firms from obtaining these critical inputs.
  • Hardcore Restrictions: Hardcore restrictions in vertical agreements are almost always illegal. Specifically, using AI to monitor or enforce resale price maintenance agreements, or exclusive or selective distribution systems, can violate EU competition law.
  • Resale Price Maintenance Agreements: Suppliers are prohibited from imposing a fixed or minimum resale price on buyers. Sellers in online markets increasingly use AI-driven price-monitoring and price-recommendation systems, which enhance market transparency. These systems are not inherently illegal, as buyers remain free to pursue their own competitive pricing strategies; their use becomes unlawful only when buyers and sellers agree to turn recommended prices into mandatory ones (a minimal sketch of such a monitoring system follows this list).
  • Exclusive or Selective Distribution Systems: AI-powered monitoring mechanisms can serve as auxiliary enforcement tools for exclusive or selective distribution systems. For example, AI can be used to monitor compliance with restrictions on the territory in which, or the customers to whom, the buyer or its customers may sell.
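
As a purely illustrative sketch of the price-monitoring systems mentioned above, the short Python example below scans observed resale prices and flags listings that fall materially below a recommended price. The data, threshold, and names are hypothetical. The monitoring itself is lawful; an infringement would arise only if the parties agreed to treat the recommendation as binding and enforce it against deviating resellers.

from dataclasses import dataclass

@dataclass
class Listing:
    reseller: str
    observed_price: float

# Hypothetical figures for illustration only.
RECOMMENDED_PRICE = 99.0     # non-binding recommended resale price
DEVIATION_THRESHOLD = 0.05   # flag listings more than 5% below the recommendation

def flag_deviations(listings: list[Listing]) -> list[Listing]:
    # Return listings priced materially below the recommended price.
    floor = RECOMMENDED_PRICE * (1 - DEVIATION_THRESHOLD)
    return [l for l in listings if l.observed_price < floor]

market = [Listing("Reseller A", 98.0), Listing("Reseller B", 89.5), Listing("Reseller C", 101.0)]
for hit in flag_deviations(market):
    print(f"{hit.reseller} lists at {hit.observed_price}, below the recommended {RECOMMENDED_PRICE}")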

Enforcement

Competition authorities are considering using AI to enhance case management and assist them in investigations by analyzing data and expanding e-discovery capabilities. This could help shorten investigations, thereby limiting costs and uncertainty for companies under investigation. However, deploying AI for such purposes will likely take time, as such tools must be carefully designed and tested to ensure appropriate legal safeguards, including the rights of the defense, the right to good administration, and compliance with EU data protection and AI regulations.
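
As a rough illustration of how AI-assisted e-discovery might work in practice, the Python sketch below ranks a handful of invented documents by their textual similarity to an investigator's query using off-the-shelf TF-IDF vectorization. It is a minimal sketch under assumed inputs, not a description of any authority's actual tooling, and it omits the safeguards (access controls, audit trails, human review) that a real deployment would require.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented documents standing in for material gathered during an inspection.
documents = [
    "Let's align our list prices for Q3 as discussed on the call.",
    "Quarterly shipping schedule for the northern warehouses.",
    "Reminder: the repricing bot should never undercut the agreed floor.",
]
query = "align prices agreed floor undercut"  # hypothetical investigator search terms

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents)           # documents -> TF-IDF vectors
query_vec = vectorizer.transform([query])                  # query mapped to the same vocabulary
scores = cosine_similarity(query_vec, doc_matrix).ravel()  # one relevance score per document

# Show the most relevant documents first so human reviewers see them early.
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")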
