EU’s New Guidelines on Banned AI Practices

On February 4, 2025, the European Commission (EC) issued draft guidelines clarifying the AI practices that are prohibited under the European Union’s (EU) Artificial Intelligence (AI) Act. While non-binding, these guidelines provide valuable clarifications and practical examples to assist businesses in navigating their obligations under the AI Act. The EC has approved the draft guidelines, with formal adoption expected in the near term.

Background

On February 2, 2025, the AI Act’s provisions on prohibited AI practices became effective, alongside its provisions on AI literacy. Article 5 of the AI Act prohibits certain AI practices deemed to pose unacceptable risks, such as AI systems that manipulate or exploit individuals, perform social scoring, or infer individuals’ emotions in workplaces or educational settings. These prohibitions apply both to companies offering such AI systems (providers) and to those using them (deployers).

Prohibited AI Practices

Below is an overview of the main prohibitions under the AI Act, as interpreted by the guidelines:

  1. Social Scoring: The AI Act prohibits offering or using AI systems that evaluate individuals based on their social behavior or personal characteristics, where this leads to detrimental treatment in unrelated contexts. For instance, AI systems that set insurance premiums or assess creditworthiness could constitute social scoring if they rely on unrelated personal characteristics. Individual ratings by users, such as ratings on car-sharing platforms, fall outside this prohibition.
  2. Manipulation and Exploitation: The Act bans AI systems that use subliminal techniques or exploit individual vulnerabilities to influence behavior and cause harm. This includes using AI in games to encourage excessive play among children. However, AI systems that operate transparently and respect user autonomy, such as those designed for language learning, are permitted.
  3. Facial Recognition and Biometric Identification: The AI Act prohibits building facial recognition databases through untargeted scraping of images from the internet or CCTV footage. For example, scraping facial images from social media to create such a database is banned, while scraping non-facial data, or using facial image databases to train AI models without identifying individuals, remains allowed.
  4. Emotion Recognition in Workplaces and Educational Institutions: Using AI to infer individuals’ emotions in workplaces and educational settings is generally prohibited. This includes tracking call-center employees’ emotions through webcams or using AI to infer students’ interest during lessons. Exceptions exist for medical and safety purposes, such as detecting fatigue in pilots.
  5. Biometric Categorization: Categorizing individuals based on sensitive attributes (e.g., race, political opinions) using biometric data is forbidden. For instance, categorizing individuals for political messaging based on their pictures is prohibited, while technical categorizations necessary for commercial services, like facial filters, are allowed.

Responsibilities for AI Providers

The guidelines state that AI system providers are responsible for ensuring their systems are not “reasonably likely” to be used for prohibited purposes. This includes adopting safeguards to prevent foreseeable misuse, such as technical safeguards and user controls. Providers are expected to clearly state the prohibited uses of their AI systems in their terms and provide guidance on appropriate oversight.

Continuous compliance is essential, involving ongoing monitoring and updates to AI systems. In cases where misuse occurs, providers are expected to take appropriate measures.

Next Steps

Companies engaging in prohibited AI practices face fines of up to EUR 35 million or seven percent of their global annual turnover, whichever is higher. The first enforcement actions are anticipated in the latter half of 2025 as EU countries finalize their enforcement regimes. Companies offering or using AI in the EU should review their AI systems and contractual terms in light of these guidelines and address any compliance gaps promptly.

In summary, the EU’s guidelines on prohibited AI practices represent a crucial step towards ensuring responsible AI development and usage, emphasizing the need for compliance and ethical considerations in the rapidly evolving AI landscape.
