Prohibited AI Practices Under the EU Artificial Intelligence Act

In February 2025, the European Commission published two sets of guidelines to clarify key aspects of the EU Artificial Intelligence Act (“AI Act”): the Guidelines on the definition of an AI system and the Guidelines on prohibited AI practices. These guidelines address the AI Act obligations that began to apply on February 2, 2025, which include the Act’s definitions, the obligations relating to AI literacy, and the prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s Guidelines on prohibited AI practices. The Guidelines run well over 100 pages and provide detailed guidance on how to interpret and apply each of the eight prohibited AI practices listed in Article 5 of the AI Act. The Guidelines also clarify the relationship between high-risk AI systems (which are regulated, rather than prohibited, under the Act) and prohibited practices, explaining that in some cases the use of a high-risk AI system may also qualify as a prohibited practice.

Key Prohibited Practices

Key takeaways from the Guidelines include the following:

  • Personalised Ads: Article 5(1)(a) prohibits the use of an AI system that deploys subliminal techniques or purposefully manipulative techniques that materially distort a person’s behavior. The Guidelines indicate that personalizing ads based on user preferences is not inherently manipulative, provided it does not rely on such techniques or exploit vulnerabilities.
  • Lawful Persuasion: Article 5(1)(b) prohibits exploiting the vulnerabilities of individuals. The Guidelines explain that lawful persuasion occurs when an AI system operates transparently and facilitates informed consent, distinguishing it from prohibited practices.
  • Vulnerability and Addiction: Article 5(1)(b) specifically prohibits exploiting a person’s vulnerability based on age, disability, or socio-economic status. Examples include AI systems that create addictive reward schedules or target vulnerable individuals with deceptive offers.
  • Profiling and Social Scoring: Article 5(1)(c) prohibits AI systems that evaluate or classify individuals based on their social behavior or personal characteristics where the resulting score leads to detrimental or unfavorable treatment. The Guidelines clarify that certain types of profiling may be acceptable as long as they comply with applicable legal frameworks.
  • Predictive Policing: Article 5(1)(d) prohibits AI systems that assess or predict the risk of an individual committing a criminal offence based solely on profiling. This prohibition applies both to law enforcement authorities and to private actors acting on their behalf.
  • Facial Image Scraping: Article 5(1)(e) prohibits creating or expanding facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. Databases that are not used for facial recognition purposes fall outside this prohibition.
  • Emotion Recognition in the Workplace: Article 5(1)(f) prohibits AI systems that infer the emotions of individuals in workplace settings, subject to limited exceptions, such as systems deployed for medical or safety reasons.

Interpreting the Scope of the AI Act

In addition to detailing each prohibited AI practice, the Guidelines consider how to interpret the scope of the AI Act:

  • Defining “Placing on the Market”, “Putting into Service”, and “Use”: The Guidelines clarify that these terms encompass a broad range of activities, including making AI systems available through various means and in-house development.
  • Research and Development Exclusions: The AI Act does not apply to research, testing, and development activities carried out before an AI system is placed on the market or put into service. Once a system is placed on the market or put into service, however, the Act applies.
  • Application to General-Purpose AI Systems: The prohibitions apply to both general-purpose AI systems and those with specific intended purposes. Providers are responsible for ensuring their systems do not engage in prohibited practices.

The Covington team continues to monitor regulatory developments on AI and regularly advises technology companies on regulatory and compliance issues. The team is available to assist with inquiries about AI regulation or related matters.
