EU’s Bold Move: Eight AI Practices Banned for Safety

On February 6, 2025, EU regulators outlined a significant step in the governance of artificial intelligence (AI): the AI Act's prohibition of eight specific practices deemed too dangerous to deploy. The measure aims to mitigate the risks posed by certain AI technologies while ensuring that innovation does not come at the expense of public safety.

Introduction to the AI Act

The AI Act was adopted last year, with the intent of regulating the burgeoning field of AI technology while fostering an environment conducive to innovation in Europe. As the United States and China accelerate their advancements in AI, the EU faces the challenge of establishing a regulatory framework that prevents misuse without stifling development.

Scope and Enforcement

While the provisions banning harmful AI applications came into effect immediately, EU member states have until August to appoint the regulators who will enforce the new rules. The Act takes a comprehensive, risk-based approach to AI regulation: companies that develop high-risk AI systems will face stricter obligations before those systems can be authorized in the EU.

Key Prohibited Practices

The following eight practices have been identified as unacceptable and are thus prohibited under the AI Act:

1. Real-Time Biometric Identification

The use of AI systems equipped with cameras for real-time biometric identification in public spaces by law enforcement is banned. This measure aims to prevent the arbitrary detention of individuals without substantial evidence. Exceptions may apply to specific threats, such as terrorism.

2. Social Scoring

AI tools that rank individuals based on personal data unrelated to risk—such as origin, skin color, or social media behavior—are prohibited. This rule aims to prevent discrimination in contexts like loan approvals and social welfare assessments.

3. Criminal Risk Assessment

Law enforcement agencies are barred from using AI to predict an individual’s likelihood of criminal behavior based solely on biometric data. Such assessments must consider objective and verifiable facts related to a person’s actions, rather than relying on facial features or other personal characteristics.

4. Scraping Facial Images

The Act prohibits tools that indiscriminately scrape the internet and CCTV footage to create extensive facial recognition databases. This practice is seen as a form of state surveillance and raises significant privacy concerns.

5. Emotion Detection

Organizations are forbidden from deploying AI systems that detect emotions through webcams or voice recognition technology in workplaces and educational settings, protecting individuals from invasive monitoring.

6. Behavior Manipulation

The use of deceptive or subliminal AI systems designed to manipulate user behavior—such as pushing consumers towards purchases—is outlawed under the new regulations.

7. Exploitation of Vulnerabilities

AI-driven toys and other systems that exploit the vulnerabilities of children, the elderly, or other vulnerable populations to push them toward harmful behavior are prohibited, safeguarding these groups from exploitation.

8. Inference of Political Opinions

AI systems that attempt to deduce individuals’ political beliefs or sexual orientation from biometric data analysis are not permitted within the EU, reinforcing the commitment to personal privacy and freedom.

Consequences for Non-Compliance

Fines for companies that fail to comply with these regulations can reach seven percent of their worldwide annual revenue or €35 million (approximately RM164 million), whichever is higher. This stringent penalty framework underscores the EU’s commitment to enforcing these new standards.
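
To illustrate the “whichever is higher” rule, here is a minimal Python sketch; the function name max_fine_eur is hypothetical and not part of the Act or any official tool, and the figures are simply the thresholds stated above.

def max_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Penalty ceiling for a prohibited-practice violation:
    the higher of EUR 35 million or 7% of worldwide annual revenue
    (assumption: illustrative helper, not an official calculator)."""
    return max(35_000_000.0, 0.07 * worldwide_annual_revenue_eur)

# Example: a company with EUR 1 billion in worldwide annual revenue faces a
# ceiling of EUR 70 million, since 7% of revenue exceeds the EUR 35 million floor.
print(f"{max_fine_eur(1_000_000_000):,.0f}")  # 70,000,000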

Conclusion

The EU’s proactive approach in outlining these prohibited AI practices sets a precedent for global AI governance. As the technology continues to evolve, regulatory bodies must balance innovation with ethical considerations and public safety.
