EU Bans AI Tools Used for Social Scoring and Predictive Policing
The European Union (EU) has officially implemented stringent regulations under its landmark AI Act, banning several AI systems classified as posing an ‘unacceptable risk.’ These prohibitions took effect on February 2, 2025, marking a significant shift in how AI technologies may be used within member states.
Categories of Banned AI Systems
Under the new legislation, the following categories of AI systems have been deemed illegal due to their potential threats to public safety, livelihoods, and individual rights:
- Social scoring systems
- Emotion recognition AI systems in workplaces and educational institutions
- Tools that assess or predict an individual's risk of committing a criminal offense
- Harmful AI-based manipulation and deception tools
- AI tools exploiting vulnerabilities
Additionally, the untargeted scraping of facial images from the internet or CCTV footage to create or expand facial recognition databases is banned. Biometric categorization systems that deduce protected characteristics, as well as real-time biometric identification by law enforcement in publicly accessible areas, are also prohibited.
Potential Penalties for Non-Compliance
Companies found violating the AI Act face severe penalties, with fines reaching up to €35 million (approximately $35.8 million) or 7% of their total worldwide annual turnover, whichever is higher. These penalties are intended to enforce compliance and ensure that organizations prioritize ethical considerations in their AI deployments.
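To make the "whichever is higher" rule concrete, here is a minimal Python sketch of the arithmetic behind the cap. The function name is illustrative, not anything defined by the AI Act, and this shows only the upper bound on fines; actual penalties are assessed case by case by regulators.

```python
def ai_act_fine_cap(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative upper bound on fines for prohibited AI practices:
    the higher of a fixed EUR 35 million or 7% of total worldwide
    annual turnover."""
    FIXED_CAP_EUR = 35_000_000.0
    TURNOVER_RATE = 0.07
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)


# A company with EUR 2 billion in worldwide annual turnover faces a cap of
# EUR 140 million, since 7% of turnover exceeds the EUR 35 million floor.
print(f"Cap: EUR {ai_act_fine_cap(2_000_000_000):,.0f}")  # Cap: EUR 140,000,000
```

For smaller companies, the fixed €35 million figure dominates; for large multinationals, the 7% turnover component becomes the binding cap.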
Exemptions and Criticisms
While the AI Act aims to mitigate risks associated with harmful AI technologies, critics point out that exemptions still allow law enforcement and migration authorities to use some of these AI systems, for example to track terrorism suspects. This has raised concerns about the implications for civil liberties and privacy rights.
Implementation Timeline and Future Regulations
The EU’s AI Act is a pioneering regulatory framework for artificial intelligence, with its provisions rolling out in phases. One component requires organizations that deploy AI to ensure adequate AI literacy among their staff, an obligation that applied from the same February 2, 2025 date as the bans.
Looking ahead, governance rules and obligations for companies developing general-purpose AI models, including large language models (LLMs) such as OpenAI’s GPT series, come into force on August 2, 2025. Obligations for most high-risk AI systems, such as those used in education, apply from August 2, 2026, while high-risk AI embedded in regulated products in sectors such as medicine and transport has an extended transition period until August 2, 2027.
Conclusion
The EU’s proactive stance on regulating AI tools reflects a growing recognition of the need for ethical oversight in technology. As AI continues to evolve, the implications of these regulations will be critical in shaping the future landscape of artificial intelligence across Europe.