Draft Guidelines on AI System Definition Under the EU AI Act
The European Commission has published draft guidelines on the definition of AI systems. These guidelines aim to clarify how the definition provided in Article 3(1) of the AI Act should be applied in practice.
Understanding the AI System Definition
Article 3(1) of the AI Act, which the guidelines interpret, defines an AI system as:
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The definition turns on a handful of elements: the system is machine-based, is designed to operate with some level of autonomy, may (but need not) exhibit adaptiveness after deployment, and infers from its inputs how to generate outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments.
Purpose of the Guidelines
The primary goal of the guidelines is to help organizations determine whether their software systems qualify as AI systems. This clarity is essential for the consistent and effective application of the AI Act's rules across the European Union.
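For organizations making that assessment, it can help to walk through the definition element by element. The sketch below is a purely illustrative, non-authoritative checklist: the class and field names are invented for this example and simply mirror the wording of Article 3(1) quoted above; they carry no legal weight and are not part of the Commission's guidelines.

```python
from dataclasses import dataclass, fields

# Hypothetical self-assessment checklist. The field names mirror the elements of
# the Article 3(1) definition; this is an illustrative construct, not a legal test.
@dataclass
class Article31Checklist:
    machine_based: bool              # runs on machines (hardware and software)
    some_level_of_autonomy: bool     # designed to operate with varying levels of autonomy
    infers_outputs_from_input: bool  # infers from input how to generate outputs
    produces_covered_outputs: bool   # predictions, content, recommendations, or decisions
    can_influence_environments: bool # outputs can affect physical or virtual environments
    # Adaptiveness after deployment "may" be present, so it is noted but not required.
    adaptive_after_deployment: bool = False

    def all_core_elements_present(self) -> bool:
        """True if every mandatory element of the definition is ticked."""
        mandatory = [f.name for f in fields(self) if f.name != "adaptive_after_deployment"]
        return all(getattr(self, name) for name in mandatory)


# Example: a fixed rule-based spreadsheet macro would likely fail the
# "infers_outputs_from_input" element and so fall outside the definition.
macro = Article31Checklist(
    machine_based=True,
    some_level_of_autonomy=False,
    infers_outputs_from_input=False,
    produces_covered_outputs=True,
    can_influence_environments=True,
)
print(macro.all_core_elements_present())  # False
```

In this reading, a system that does not infer how to generate its outputs from its inputs would likely fall outside the definition, though any real assessment should follow the text of the guidelines themselves rather than a simplified checklist like this one.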
Non-Binding Nature and Evolution
It is important to note that these guidelines are not binding. They are intended to evolve over time, adapting to practical experiences and emerging questions or use cases. This flexibility allows for a responsive regulatory framework as the landscape of AI continues to develop.
Complementary Guidelines
In conjunction with the definition guidelines, the Commission has also released the Guidelines on Prohibited Artificial Intelligence (AI) Practices. Those guidelines outline the specific practices deemed unacceptable under the AI Act, reinforcing the European Commission’s commitment to ethical AI development.
Risk Classification of AI Systems
The AI Act classifies AI systems into various risk categories, including:
- Prohibited AI practices (unacceptable risk)
- High-risk AI systems
- AI systems subject to specific transparency obligations
As of February 2, 2025, the first rules under the AI Act apply, including the AI system definition, the AI literacy requirements, and the prohibitions on AI practices that pose unacceptable risks within the EU.
Conclusion
The draft guidelines from the European Commission represent a significant step towards the responsible governance of AI technologies. By providing clarity on what constitutes an AI system and outlining permissible and prohibited practices, these guidelines aim to foster innovation while ensuring the protection of health, safety, and fundamental rights.
As the guidelines are still in draft form, the Commission has yet to announce when they will be finalized, leaving room for further refinement and adaptation in line with ongoing developments in the AI sector.