Civil Rights Groups Urge EU to Prioritize Human Rights in AI Act Implementation Guidelines
A coalition of over two dozen civil society organizations and individuals has called on the European Union to prioritize human rights in upcoming guidelines for the implementation of the EU AI Act. These guidelines, to be issued by the newly established AI Office, will help interpret the Act’s scope and prohibited practices following its adoption in 2024.
Overview of the AI Act
The AI Act establishes definitions for artificial intelligence systems and outlines prohibited practices, including:
- Remote biometric identification
- Social scoring
- Predictive policing
- Emotion recognition
The guidelines follow the EU's consultation process on defining AI systems and their prohibited uses. The coalition emphasizes that simpler AI systems must also fall within the Act's scope, to close potential loopholes for systems excluded on grounds of technical simplicity.
Key Stakeholders and Their Concerns
Among the signatories of the statement are prominent organizations such as Amnesty International, Privacy International, Access Now, and Statewatch, alongside various academic experts. These stakeholders emphasize that the EU Charter of Fundamental Rights should serve as the central guiding basis for the Act’s implementation, particularly in light of growing concerns about biometric surveillance and predictive policing technologies.
Recommendations for Implementation
The organizations advocate a broad interpretation of the Act's prohibitions to guard against harms including discrimination, racism, and prejudice. In particular, they call for clearly defined and robustly enforced restrictions on:
- Social scoring
- Biometric surveillance
The coalition wants these restrictions applied across contexts including welfare, migration, education, and law enforcement. This stance builds on recent European Court of Justice rulings that have required strict justification for biometric data collection by authorities.
Expanding Definitions and Addressing Loopholes
The coalition’s recommendations also include:
- Expanding the definition of biometric categorization to cover inferences about ethnicity, gender identity, and other personal characteristics.
- Addressing loopholes that might permit retrospective remote biometric identification or emotion recognition systems.
This follows recent research demonstrating both the capabilities and limitations of emotion recognition technologies.
Concerns About the Consultation Process
Regarding the consultation process, participating organizations have expressed concerns about:
- Lack of transparency
- Limited timeframes
- Insufficient inclusion of diverse perspectives
They recommend more comprehensive involvement of civil society stakeholders in future AI Act-related consultations. This call comes as practical compliance work on the Act gets underway, including COMPL-AI, described as the first compliance evaluation framework for generative AI models under the Act.
Taken together, the recommendations underscore the coalition's insistence that human rights remain at the forefront of AI regulation, particularly amid rapid technological advances and expanding surveillance capabilities.