Prohibited AI Practices Under the EU AI Act: Key Insights

European Commission Guidelines on Prohibited AI Practices under the EU Artificial Intelligence Act

In February 2025, the European Commission published two sets of guidelines clarifying key aspects of the EU Artificial Intelligence Act (“AI Act”): one on the definition of an AI system and one on prohibited AI practices. These guidelines are essential for understanding the obligations that took effect on February 2, 2025, which cover the definition of AI systems, AI literacy requirements, and the prohibitions on certain AI practices.

Overview of the Guidelines

The guidelines on prohibited AI practices are extensive, spanning over 100 pages, and offer detailed direction on how to interpret and apply the eight prohibited AI practices specified in Article 5 of the AI Act.

Importantly, the guidelines clarify the relationship between high-risk AI systems (regulated under the Act) and prohibited practices. In certain instances, the use of a high-risk AI system may amount to a prohibited practice, while an AI system that falls under an exception in Article 5 may still qualify as high-risk under Article 6.

Key Takeaways from the Guidelines

  • Personalized Ads: Article 5(1)(a) prohibits the use of AI systems that employ subliminal techniques or manipulative strategies to distort behavior. The guidelines clarify that personalizing ads based on user preferences is not inherently manipulative, provided it does not use deceptive techniques that undermine individual autonomy.
  • Lawful Persuasion: Article 5(1)(b) restricts exploiting vulnerabilities of individuals or groups. To be classified as a prohibited practice, any distortion must extend beyond lawful persuasion, which is characterized by transparency and informed consent.
  • Vulnerability and Addiction: Article 5(1)(b) specifically forbids exploiting vulnerabilities based on age, disability, or socio-economic status. Examples include AI systems that create addictive rewards to promote excessive usage or target vulnerable populations with scams.
  • Profiling and Social Scoring: Article 5(1)(c) prohibits social scoring, i.e., AI systems that evaluate or classify individuals based on their social behavior or personal characteristics where the resulting score leads to detrimental or unfavourable treatment. Certain profiling practices can cross this line, such as an insurance company using unrelated financial information to determine life insurance eligibility.
  • Predictive Policing: Article 5(1)(d) bans AI systems that predict criminal behavior based solely on profiling. This prohibition applies to both law enforcement and private actors acting on behalf of law enforcement.
  • Facial Image Scraping: Article 5(1)(e) prohibits creating facial recognition databases through indiscriminate scraping of images. However, databases used for AI model training that do not identify individuals are exempt.
  • Emotion Recognition in the Workplace: Article 5(1)(f) restricts inferring emotions in workplace settings; the guidelines clarify that tracking the emotions of customers (rather than employees) in commercial interactions may be permissible.

Interpreting the Scope of the AI Act

The guidelines offer insights into the interpretation of terms such as “placing on the market”, “putting into service”, and “use”. The prohibitions apply both to general-purpose AI systems and to systems with a specific intended purpose, and providers of general-purpose AI systems are expected to implement safeguards to prevent misuse.

Importantly, Article 2(8) of the AI Act provides that the Act does not apply to research, testing, or development activities regarding AI systems before they are placed on the market or put into service, leaving developers room to experiment with functionalities that might otherwise be considered manipulative.

Conclusion

The European Commission’s guidelines are a critical step in regulating AI technologies, ensuring that developers and companies understand the limitations and responsibilities associated with AI applications. As the landscape of technology continues to evolve, adherence to these regulations will be essential to maintain ethical practices in AI development.
