EU Regulators Outline Eight Harmful AI Practices to be Banned
On February 6, 2025, EU regulators took a significant step in the governance of artificial intelligence (AI) by detailing how the AI Act will curb the risks posed by certain AI technologies. The Act identifies eight specific practices deemed too dangerous to implement and seeks to ensure that innovation does not come at the expense of public safety.
Introduction to the AI Act
The AI Act was adopted last year, with the intent of regulating the burgeoning field of AI technology while fostering an environment conducive to innovation in Europe. As the United States and China accelerate their advancements in AI, the EU faces the challenge of establishing a regulatory framework that prevents misuse without stifling development.
Scope and Enforcement
While the provisions banning harmful AI applications came into effect immediately, EU member states have until August to appoint the regulators who will enforce the new rules. The Act takes a comprehensive, risk-based approach to AI regulation: companies that develop high-risk AI systems will face stricter authorization obligations within the EU.
Key Prohibited Practices
The following eight practices have been identified as unacceptable and are thus prohibited under the AI Act:
1. Real-Time Biometric Identification
The use of AI systems equipped with cameras for real-time biometric identification in public spaces for law enforcement is banned. This measure aims to prevent the arbitrary detention of individuals without substantial evidence. Exceptions may apply to specific threats, such as terrorism.
2. Social Scoring
AI tools that rank individuals based on personal data unrelated to risk—such as origin, skin color, or social media behavior—are prohibited. This rule aims to prevent discrimination in contexts like loan approvals and social welfare assessments.
3. Criminal Risk Assessment
Law enforcement agencies are barred from using AI to predict an individual’s likelihood of criminal behavior based solely on biometric data. Such assessments must consider objective and verifiable facts related to a person’s actions, rather than relying on facial features or other personal characteristics.
4. Scraping Facial Images
The Act prohibits tools that indiscriminately scrape the internet and CCTV footage to create extensive facial recognition databases. This practice is seen as a form of state surveillance and raises significant privacy concerns.
5. Emotion Detection
Organizations are forbidden from deploying AI systems that detect emotions through webcams or voice recognition technology in workplaces and educational settings, protecting individuals from invasive monitoring.
6. Behavior Manipulation
The use of deceptive or subliminal AI systems designed to manipulate user behavior—such as pushing consumers towards purchases—is outlawed under the new regulations.
7. Exploitation of Vulnerabilities
AI-driven toys and systems designed for children, the elderly, or other vulnerable populations that encourage harmful behaviors are prohibited, safeguarding these groups from exploitation.
8. Inference of Political Opinions
AI systems that attempt to deduce individuals’ political beliefs or sexual orientation from biometric data analysis are not permitted within the EU, reinforcing the commitment to personal privacy and freedom.
Consequences for Non-Compliance
Companies that fail to adhere to these regulations face fines of up to seven percent of worldwide annual revenue or €35 million (approximately RM164 million), whichever is higher. This stringent penalty framework underscores the EU’s commitment to enforcing the new standards.
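The penalty ceiling described above can be sketched as a simple calculation. This is only an illustration of the "whichever is higher" rule; actual fines are set case by case by regulators, and the function name here is hypothetical:

```python
def max_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Upper bound on an AI Act fine for prohibited practices:
    the higher of 7% of worldwide annual revenue or EUR 35 million."""
    return max(0.07 * worldwide_annual_revenue_eur, 35_000_000)

# A company with EUR 1 billion in revenue: 7% = EUR 70 million, which
# exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies whose seven-percent figure falls below €35 million, the flat €35 million ceiling applies instead.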
Conclusion
The EU’s proactive approach in outlining these prohibited AI practices sets a precedent for global AI governance. As the technology continues to evolve, regulatory bodies must balance innovation with ethical considerations and public safety.