European Commission Guidelines on Prohibited AI Practices under the EU Artificial Intelligence Act
In February 2025, the European Commission published two sets of guidelines clarifying key aspects of the EU Artificial Intelligence Act ("AI Act"): one on the definition of an AI system and one on prohibited AI practices. These guidelines are essential for understanding the obligations that took effect on February 2, 2025, covering definitions, AI literacy requirements, and prohibitions on certain AI practices.
Overview of the Guidelines
The guidelines on prohibited AI practices are extensive, spanning over 100 pages, and offer detailed instructions on interpreting and applying the eight prohibited AI practices specified in Article 5 of the AI Act.
Importantly, the guidelines elucidate the relationship between high-risk AI systems (regulated under the Act) and prohibited practices. In certain instances, the use of a high-risk AI system may qualify as a prohibited practice, while AI systems that fall under an exception in Article 5 may qualify as high-risk under Article 6.
Key Takeaways from the Guidelines
- Personalized Ads: Article 5(1)(a) prohibits the use of AI systems that employ subliminal techniques or manipulative strategies to distort behavior. The guidelines clarify that personalizing ads based on user preferences is not inherently manipulative, provided it does not use deceptive techniques that undermine individual autonomy.
- Lawful Persuasion: For a practice to fall within the prohibitions on manipulation and exploitation, the distortion of behavior must go beyond lawful persuasion, which is characterized by transparency and informed consent.
- Vulnerability and Addiction: Article 5(1)(b) specifically forbids exploiting vulnerabilities based on age, disability, or socio-economic status. Examples include AI systems that create addictive rewards to promote excessive usage or target vulnerable populations with scams.
- Profiling and Social Scoring: Article 5(1)(c) prohibits AI systems that evaluate or classify individuals based on their social behavior or personal characteristics, where the resulting score leads to detrimental or unfavorable treatment. Certain profiling practices can be deemed unacceptable, such as an insurance company using unrelated financial information to determine life insurance eligibility.
- Predictive Policing: Article 5(1)(d) bans AI systems that predict the risk of a person committing a criminal offense based solely on profiling or an assessment of personality traits. This prohibition applies to both law enforcement and private actors acting on behalf of law enforcement.
- Facial Image Scraping: Article 5(1)(e) prohibits creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. However, databases that are not used to identify individuals, such as those used only for AI model training, fall outside the prohibition.
- Emotion Recognition in the Workplace: Article 5(1)(f) prohibits inferring emotions in workplace and educational settings. The guidelines clarify that tracking the emotions of customers, as opposed to employees, in commercial interactions may remain permissible.
Interpreting the Scope of the AI Act
The guidelines also clarify the interpretation of terms such as "placing on the market", "putting into service", and "use". The prohibitions apply both to general-purpose AI systems and to systems with a specific intended purpose, and providers of general-purpose AI systems are expected to implement safeguards to prevent misuse.
Importantly, Article 2(8) of the AI Act states that the Act does not apply to research, testing, or development activities conducted before an AI system is placed on the market or put into service, allowing developers to experiment with functionalities that might otherwise be seen as manipulative. This exclusion does not, however, cover testing in real-world conditions.
Conclusion
The European Commission’s guidelines are a critical step in regulating AI technologies, ensuring that developers and companies understand the limitations and responsibilities associated with AI applications. As the landscape of technology continues to evolve, adherence to these regulations will be essential to maintain ethical practices in AI development.