Human Rights and Justice in the AI Act Implementation
The European Commission's upcoming guidelines for implementing the Artificial Intelligence (AI) Act are crucial to ensuring that human rights and justice are prioritized in the development and deployment of AI systems. These guidelines will help shape how AI systems are defined, which types are allowed, and which should be explicitly prohibited.
Importance of the Guidelines
When the consultation on the AI Act closed on December 11, 2024, it had become evident that the final Act contains various loopholes concerning the protection of fundamental rights. In particular, the guidelines must make clear that fundamental rights are the central guiding basis for effective enforcement of the AI Act.
Key Recommendations for Guidelines
To ensure a fundamental rights-based approach, the AI Office is urged to incorporate the following recommendations:
- Inclusion of Simple Systems: Systems that may be considered ‘simple’ should be explicitly included in the definition of an AI system. This is vital to prevent developers from bypassing the obligations of the AI Act through technical manipulation: for example, converting an AI system into a rule-based system could circumvent the regulation while preserving similar functionality and risks.
- Prohibition of Unacceptable Risks: The guidelines must clarify the prohibition of systems posing unacceptable risks to fundamental rights. This includes preventing both the weaponization of technology against marginalized groups and unlawful mass biometric surveillance.
- Social Scoring Practices: Social scoring must be banned comprehensively, including existing practices in welfare and migration procedures. The guidelines must interpret “social behaviour” broadly so that it encompasses the various risk indicators used in such systems.
- Predictive Policing: The guidelines should clarify that predicting the ‘risk of committing a criminal offence’ covers all systems claiming to predict a wide range of behaviours classified as criminal. This is crucial to capture systems that assess the likelihood of being registered in police databases.
- Facial Recognition Scraping: The ban on untargeted scraping of facial images must be applied stringently to prevent misuse. Any exceptions must align with the case law of the EU Court of Justice, ensuring that facial images scraped from public sources cannot be linked to innocent individuals.
- Emotion Recognition Systems: The guidelines should distinguish legitimate medical devices from systems aimed at inferring emotions, which often lack scientific grounding and pose privacy risks.
- Remote Biometric Identification: Strengthening the language around remote biometric identification (RBI) is essential to preventing mass surveillance. The guidelines should specify that real-time RBI systems intended for export fall within the scope of the ban.
Interplay with Other Union Law
The guidelines must ensure that human rights law, particularly the EU Charter of Fundamental Rights, remains central to the implementation of the AI Act. AI systems should be assessed in the broader context of discrimination, racism, and prejudice, so that assessment serves a preventative purpose against potential harm.
Shortcomings in the Consultation Process
The shortcomings of the Commission’s consultation process, such as limited notice and inadequate feedback mechanisms, highlight the need for meaningful civil society engagement in future consultations related to the AI Act. The overall objective must be to amplify the voices of impacted communities and ensure their perspectives are reflected in policy developments.
Conclusion
As stakeholders involved in the AI Act implementation, civil society organizations expect the AI Office to uphold a rights-based enforcement of this legislation, prioritizing human rights over the interests of the AI industry. The forthcoming guidelines will play a pivotal role in shaping a future where AI technology serves the public good while safeguarding fundamental rights.