European Commission Guidelines on the Definition of an “AI System”
In February 2025, the European Commission published two important sets of guidelines clarifying key aspects of the EU Artificial Intelligence Act (“AI Act”): the Guidelines on the definition of an AI system and the Guidelines on prohibited AI practices. They provide essential guidance on the provisions that began to apply on February 2, 2025, including the Act’s definitions, the obligations regarding AI literacy, and the prohibitions on certain AI practices.
This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems, identifying critical components and examples that elucidate these definitions.
Defining an “AI System” Under the AI Act
The AI Act (Article 3(1)) defines an “AI system” as:
- A machine-based system;
- Designed to operate with varying levels of autonomy;
- That may exhibit adaptiveness after deployment;
- That operates for explicit or implicit objectives;
- That infers, from the input it receives, how to generate outputs;
- Whose outputs can include predictions, content, recommendations, or decisions;
- And whose outputs can influence physical or virtual environments.
The Guidelines provide explanatory guidance on each of these seven elements, which are essential for understanding what constitutes an AI system.
Key Takeaways from the Guidelines
- Machine-based: The term refers to systems developed with and run on machines, encompassing a wide variety of computational systems, including emerging quantum computing systems. Notably, biological or organic systems can also qualify as machine-based if they provide computational capacity.
- Autonomy: The reference to varying levels of autonomy concerns a system’s ability to operate with some degree of independence from human involvement. Systems designed to operate solely with full manual human control are excluded from the definition. For example, a system that requires manually provided inputs but then generates an output on its own, without that output being explicitly controlled by a human, can still qualify as an AI system.
- Adaptiveness: This element refers to a system’s self-learning capabilities, which allow its behavior to change while in use. Because the definition says a system “may” exhibit adaptiveness after deployment, it is an optional rather than a decisive condition, but it remains a significant characteristic of many AI systems.
- Objectives: Objectives are the explicit or implicit goals of the tasks performed by the AI system. The Guidelines distinguish a system’s internal objectives from its external intended purpose, which relates to the context of deployment. For instance, a corporate AI assistant’s intended purpose is to assist a given department, a purpose the system pursues through its internal objectives.
- Inferencing and AI techniques: The ability to infer, from the input it receives, how to generate outputs is a key condition of an AI system. This encompasses machine learning techniques such as supervised, unsupervised, and reinforcement learning, as well as logic- and knowledge-based approaches, which enable inferencing during the system’s development phase (see the sketch after this list).
- Outputs: Outputs from an AI system can be categorized into four types: predictions (estimations about unknown values), content (newly generated material), recommendations (suggestions for actions or products), and decisions (conclusions made by the AI).
- Interaction with the environment: An AI system is characterized by its active impact on its deployment environment, whether physical or virtual, rather than being passive.
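To make the inference criterion concrete, the following is a minimal, hypothetical Python sketch (it does not appear in the Guidelines, and the data and model choice are illustrative assumptions): a supervised-learning model derives its input-to-output mapping from training data rather than executing rules written in advance by a human.

```python
# A minimal sketch of the "inference" criterion: a supervised-learning
# model learns the input-to-output mapping from data, rather than
# executing rules specified in advance by a developer.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: feature vectors and their labels.
X_train = [[0.1, 1.2], [0.4, 0.9], [2.1, 0.3], [1.8, 0.4]]
y_train = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)          # the mapping is learned, not hand-coded

# At run time, the system infers an output (here, a prediction) from new input.
print(model.predict([[1.9, 0.5]]))   # e.g. [1]
```

The point of the sketch is that no human wrote the decision rule itself; the system inferred it, which is precisely what the definition’s fifth element targets.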
Exclusions from the Definition
The Guidelines also specify exclusions from the AI system definition: simpler traditional software systems and programming approaches that rely solely on rules defined by humans to execute operations do not qualify. Examples cited include systems for improving mathematical optimization, basic data processing, systems based on classical heuristics, and simple prediction systems, all of which lack the capacity to analyze patterns and adjust their outputs autonomously.
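By contrast with the learned model sketched above, a purely rule-based program executes only the operations its developer specified in advance. The hypothetical example below (the function name and pricing rules are invented for illustration) shows why such software would fall outside the definition: nothing in it is inferred.

```python
# A minimal sketch of the exclusion: a function whose behavior is fully
# determined by human-defined rules. It executes operations but does not
# infer how to generate its output.
def shipping_cost(weight_kg: float) -> float:
    # Every branch below was specified in advance by a developer.
    if weight_kg <= 1.0:
        return 5.00
    if weight_kg <= 5.0:
        return 8.50
    return 12.00

print(shipping_cost(3.2))  # 8.5 -- the same input always yields the same rule-driven output
```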
In conclusion, the European Commission’s guidelines provide a comprehensive framework for understanding AI systems within the context of the AI Act. As regulatory developments continue to unfold, these definitions will be vital in guiding compliance and shaping the future of AI technologies.