Study on the EU’s AI Act: Consultation on Definitions and Prohibited Uses
The European Union has opened a consultation to gather input on two crucial elements of its Artificial Intelligence Act: the definition of AI systems and the list of prohibited AI practices. The consultation is part of the EU’s wider effort to develop compliance guidance for the newly adopted law, which entered into force on August 1, 2024.
Consultation Details
The consultation runs until December 11, 2024. Stakeholders are invited to comment on the definition of AI systems and to suggest examples of software that should fall outside the law’s scope. The initiative follows the landmark political agreement on the AI Act reached in December 2023.
Overview of the AI Act
The AI Act introduces a comprehensive regulatory framework impacting businesses globally across various sectors. Its primary objectives are to promote human-centric and trustworthy AI, while ensuring high safety standards and protecting fundamental rights.
Classification of AI Systems
Under the AI Act, AI systems are categorized into four risk levels, from most to least restricted:
- Unacceptable Risk
- High Risk
- Specific Transparency Risk
- Minimal Risk
High-risk systems include safety-critical applications in sectors such as critical infrastructure, employment, law enforcement, and the administration of justice. Systems posing specific transparency risks, such as chatbots and digital assistants, must comply with certain transparency obligations.
Transparency Requirements
Beginning August 2, 2026, providers must inform users when they are interacting with an AI system, unless this is obvious from the context. Additional transparency obligations apply to systems that involve:
- Emotion Recognition
- Biometric Categorization
- Deepfakes
This reflects an increasing concern regarding the manipulation of media and personal data through AI technologies.
Prohibited AI Applications
The Act explicitly bans specific AI applications that are considered to pose unacceptable risks, including:
- China-style social scoring systems
- Real-time remote biometric identification (such as facial recognition) in publicly accessible spaces, subject to narrow law-enforcement exceptions
The ongoing consultation welcomes comprehensive feedback on these prohibited uses, with the European Commission set to publish guidance on defining AI systems and banned applications in early 2025.
Implementation Framework
For effective implementation of the AI Act, EU Member States are required to establish or designate three types of authorities:
- Market Surveillance Authorities
- Notifying Authorities
- National Public Authorities responsible for enforcing fundamental rights
Member States have the discretion to structure these authorities in a manner that aligns with their national regulatory frameworks, as illustrated by Spain’s centralized approach and Finland’s proposed decentralized model. This flexibility aims to ensure consistent enforcement across the EU while accommodating national regulatory traditions.
Conclusion
The EU’s consultation represents a crucial opportunity for stakeholders in the AI sector to influence the regulatory landscape. By providing feedback on the definitions and applications of AI systems, participants can help shape a framework that balances innovation with the need for safety and ethical standards in artificial intelligence.