Understanding the EU Artificial Intelligence Act: Obligations for Limited-Risk AI Systems
The EU Artificial Intelligence Act (AI Act) takes a risk-based approach to regulating AI technologies in the European Union: the obligations it imposes scale with the level of risk an AI system poses. This article focuses on the obligations imposed on limited-risk AI systems, which face a lighter regulatory touch than high-risk systems.
What are Limited-Risk AI Systems?
Limited-risk AI systems are those that interact directly with individuals or generate content, and that therefore pose specific risks of impersonation or deception. The EU AI Act subjects them to transparency obligations rather than the full high-risk regime because they are less likely to cause significant harm or to infringe fundamental rights.
Examples of limited-risk AI systems include:
- Chatbots and digital assistants that interact directly with users.
- Systems generating synthetic audio, image, video, or text content, such as generative AI models.
- Technologies that create or manipulate content resulting in deep fakes.
- Emotion recognition and biometric categorization systems used in various sectors.
Transparency Obligations
Transparency obligations for limited-risk AI systems differ for providers and deployers of such systems, ensuring that users are adequately informed about the nature of their interactions with AI.
Provider Obligations
According to Articles 50(1) and 50(2) of the AI Act, providers must:
- Ensure that AI systems designed to interact directly with individuals inform users that they are engaging with an AI system, unless this is obvious to a reasonably well-informed, observant, and circumspect person given the circumstances and context of use.
- Mark the outputs of AI systems generating synthetic audio, image, video, or text content in a machine-readable format, ensuring they are detectable as artificially generated or manipulated (a minimal illustration follows this list). As far as technically feasible, providers must ensure that their technical solutions are:
- Effective: the solutions must reliably mark and detect synthetic content.
- Interoperable: the solutions should work consistently across platforms and tools.
- Robust and reliable: the solutions must withstand common transformations and remain dependable over time.
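The Act does not prescribe a specific marking technology; in practice, providers look to standards such as C2PA content credentials or statistical watermarks. As a minimal illustration of the machine-readable principle only, the Python sketch below embeds a provenance tag into a PNG text chunk using Pillow; the tag names are assumptions, not a standardized scheme.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Placeholder image standing in for a model's synthetic output.
synthetic_output = Image.new("RGB", (256, 256), color="gray")

# Attach a machine-readable provenance tag as a PNG text chunk.
# Key and value names here are illustrative, not a standardized scheme.
metadata = PngInfo()
metadata.add_text("ai-generated", "true")
metadata.add_text("generator", "example-model-v1")  # hypothetical identifier

synthetic_output.save("output.png", pnginfo=metadata)

# A downstream detector can read the tag back from the saved file.
reloaded = Image.open("output.png")
print(reloaded.text.get("ai-generated"))  # -> "true"
```

Note that metadata of this kind is stripped by many re-encoding pipelines, which is precisely why the Act stresses robustness and reliability; a compliant solution would need far stronger tamper resistance than a plain metadata field.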
Deployer Obligations
Articles 50(3) and 50(4) define the following obligations for deployers:
- Disclose that image, audio, or video content generated or manipulated by an AI system and constituting a deep fake has been artificially created or manipulated, subject to limited exceptions such as uses authorized by law for law enforcement purposes.
- For AI-generated or manipulated text published to inform the public on matters of public interest, disclose that the content has been artificially generated or manipulated, unless the content has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication (a sketch of such a text disclosure follows this list).
- Inform individuals exposed to emotion recognition or biometric categorization systems about the operation of those systems, and process any personal data in compliance with the General Data Protection Regulation (GDPR).
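To make the text-disclosure rule concrete, here is a minimal Python sketch of deployer-side publication logic. The function name, the disclosure wording, and the single `human_reviewed` flag are assumptions used for illustration; a real workflow would also need to establish that a natural or legal person actually holds editorial responsibility.

```python
def prepare_for_publication(ai_text: str, human_reviewed: bool) -> str:
    """Attach a disclosure label to AI-generated text before publication.

    Illustrative only: mirrors the Article 50(4) logic under which the
    disclosure may be omitted where the content has undergone human review
    or editorial control and someone holds editorial responsibility.
    """
    if human_reviewed:
        # Editorial-control exemption applies; publish without the label.
        return ai_text
    # Otherwise, label the content as artificially generated.
    return "[Disclosure: this text was generated by an AI system.]\n\n" + ai_text


print(prepare_for_publication("Market summary ...", human_reviewed=False))
```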
Timing and Format of Notices
Information regarding limited-risk AI systems must be provided in a clear and distinguishable manner, at the latest at the time of the user's first interaction with or exposure to the system. Special attention must be given to vulnerable groups, ensuring that the information is accessible to persons with disabilities. A minimal sketch of a first-interaction notice follows.
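The sketch below shows one way a provider might guarantee the notice appears no later than the first interaction. The class, the notice wording, and the placeholder model call are assumptions; in a real product the notice would live in the UI layer and would also need to meet accessibility requirements (for example, being perceivable by screen readers).

```python
class ChatSession:
    """Chat wrapper that surfaces the AI disclosure at first contact.

    Sketch only: the notice text and the backend call are illustrative.
    """

    DISCLOSURE = "Notice: you are interacting with an AI system."

    def __init__(self) -> None:
        self._first_turn = True

    def reply(self, user_message: str) -> str:
        answer = self._generate(user_message)
        if self._first_turn:
            self._first_turn = False
            # Provide the notice at the latest at the first interaction.
            return f"{self.DISCLOSURE}\n{answer}"
        return answer

    def _generate(self, user_message: str) -> str:
        # Placeholder for the actual model call.
        return f"(model response to: {user_message!r})"


session = ChatSession()
print(session.reply("Hello"))   # first turn includes the disclosure
print(session.reply("Thanks"))  # later turns do not repeat it
```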
Role of the European Commission
The European Commission is required to review the list of AI systems subject to these transparency obligations every four years. In addition, the Commission's AI Office will encourage and facilitate the drawing up of codes of practice at Union level to support compliance with the detection and labeling obligations.
Interplay with GDPR and Digital Services Act (DSA)
The transparency obligations imposed by the AI Act apply alongside GDPR requirements, in particular the duty to inform data subjects about the purposes for which their personal data is processed. Under the DSA, providers of very large online platforms must additionally identify and mitigate systemic risks associated with AI-generated content.
Enforcement and Penalties
National market surveillance authorities are responsible for ensuring compliance with the AI Act's transparency requirements. Non-compliance can lead to administrative fines of up to €15 million or 3% of the operator's total worldwide annual turnover for the preceding financial year, whichever is higher.
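The "whichever is higher" cap is simple arithmetic: the flat €15 million ceiling governs until 3% of turnover exceeds it, which happens for turnovers above €500 million. A one-line check:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine: EUR 15 million or 3% of total
    worldwide annual turnover, whichever is higher."""
    return max(15_000_000.0, 0.03 * global_annual_turnover_eur)


# Example: at EUR 2 billion turnover, 3% (EUR 60 million) exceeds the flat cap.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # -> 60,000,000
```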
Timeline for Implementation
The transparency requirements for limited-risk AI systems under the AI Act will apply starting from August 2, 2026.