European Commission Guidelines on the Definition of an “AI System”
In February 2025, the European Commission published two sets of guidelines aimed at clarifying essential aspects of the EU Artificial Intelligence Act ("AI Act"): one on the definition of an AI system and one on prohibited AI practices. The guidelines are intended to clarify the first obligations of the AI Act, which began to apply on February 2, 2025, covering definitions, AI literacy requirements, and prohibitions on certain AI practices.
Defining an “AI System” Under the AI Act
The AI Act (Article 3(1)) defines an “AI system” as:
- Machine-based: A system that operates on machines, which can range from traditional computational systems to emerging quantum computing technologies. Interestingly, even biological or organic systems may qualify if they provide computational capacity.
- Varying Levels of Autonomy: This refers to the system’s capability to function with some independence from human involvement. Systems designed to operate only under full manual human control fall outside the AI system definition.
- Adaptiveness: The ability of a system to exhibit self-learning capabilities and alter its behavior after deployment. However, it is crucial to note that adaptiveness is not a strict requirement for an AI system.
- Objectives: These are the explicit or implicit goals of the AI system, which may differ from its intended purpose; the intended purpose depends on the system’s context of use.
- Inferencing and AI Techniques: The capability to infer outputs from inputs is an essential condition for an AI system. Various AI techniques, including machine learning approaches such as supervised and reinforcement learning, enable this inferencing.
- Outputs: Outputs can be categorized into four main types: predictions, content, recommendations, and decisions.
- Interaction with the Environment: An AI system is characterized by its active engagement with its environment, making an impact rather than remaining passive.
The guidelines also clarify that simpler traditional software systems or those based solely on rules defined by humans do not qualify as AI systems. Examples include basic data processing systems and classical heuristics, which, despite their capacity to infer, lack the advanced analytical capabilities needed to meet the AI definition.
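As a purely illustrative sketch (not a legal test, and with invented field and function names), the cumulative elements discussed above can be pictured as a checklist. Note that adaptiveness is deliberately omitted from the required elements, since the guidelines treat it as optional:

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Hypothetical characteristics of a system, loosely mirroring Article 3(1)."""
    machine_based: bool          # runs on computational hardware
    has_some_autonomy: bool      # can act without full manual human control
    infers_outputs: bool         # derives outputs from inputs, beyond fixed human-defined rules
    produces_outputs: bool       # predictions, content, recommendations, or decisions
    influences_environment: bool # outputs can affect physical or virtual environments

def meets_ai_system_definition(p: SystemProfile) -> bool:
    """Rough screening: all listed elements must hold simultaneously.

    Adaptiveness is intentionally not checked — per the guidelines,
    it is an optional characteristic, not a required one.
    """
    return all([
        p.machine_based,
        p.has_some_autonomy,
        p.infers_outputs,
        p.produces_outputs,
        p.influences_environment,
    ])

# A rule-based calculator: machine-based, but infers nothing beyond human-defined rules.
calculator = SystemProfile(True, False, False, True, True)

# A recommender system trained via supervised learning.
recommender = SystemProfile(True, True, True, True, True)

print(meets_ai_system_definition(calculator))   # False
print(meets_ai_system_definition(recommender))  # True
```

This is only a mnemonic for how the definitional elements combine cumulatively; the actual legal assessment is a case-by-case analysis under the Commission’s guidelines.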
Key Takeaways from the Guidelines
Some significant points from the guidelines include:
- Machine-based systems are vital as they encompass all forms of computation, including cutting-edge technologies.
- Autonomy is essential for defining an AI system, with clear distinctions made between systems requiring human input and those operating independently.
- Adaptiveness is highlighted as an asset, but not a mandatory feature, allowing for flexibility in system classification.
- Objectives encompass both internal goals and external purposes, illustrating the multifaceted nature of AI deployment.
- The definition stresses the importance of inferencing capabilities, which are foundational to the operation of AI systems.
- Outputs are categorized into four types — predictions, content, recommendations, and decisions — clarifying what an AI system can generate.
- Interaction with the environment marks a significant difference between AI systems and traditional software, emphasizing the active role of AI in shaping outcomes.
As the regulatory landscape for AI continues to evolve, these guidelines serve as a foundational reference for understanding what constitutes an AI system under the EU’s legal framework. The ongoing monitoring of regulatory developments is crucial for stakeholders in the tech industry as they navigate compliance and innovation in this rapidly changing field.