Defining AI Systems: Insights from the European Commission’s Guidelines

Understanding the Scope of the “Artificial Intelligence (AI) System” Definition: Key Insights

With the entry into force of the AI Act (Regulation (EU) 2024/1689) in August 2024, a pioneering regulatory framework for AI was established. On February 2, 2025, the first provisions of the AI Act became applicable, including the AI system definition, AI literacy obligations, and a limited number of prohibited AI practices. In line with Article 96 of the AI Act, the European Commission released detailed guidelines on February 6, 2025, to clarify the application of the definition of an AI system.

These non-binding guidelines are of high practical relevance, as they seek to bring legal clarity to one of the most fundamental aspects of the Act: what qualifies as an “AI system” under EU law. Their publication offers critical guidance for developers, providers, deployers, and regulatory authorities aiming to understand the scope of the AI Act and to assess whether specific systems fall within it.

“AI System” Definition Elements

Article 3(1) of the AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate output, such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The European Commission emphasizes that this definition is based on a lifecycle perspective, covering both the building phase (pre-deployment) and the usage phase (post-deployment). Importantly, not all definitional elements must always be present—some may only appear at one stage, making the definition adaptable to a wide range of technologies, in line with the AI Act’s future-proof approach.

Machine-based System

The guidelines reaffirm that every AI system must operate through machines, comprising both hardware components (e.g., processors, memory, and interfaces) and software components (e.g., code, algorithms, and models). This includes not only traditional digital systems but also advanced platforms such as quantum computing and biological computing, provided they possess computational capacity.

Autonomy

Another essential requirement is autonomy, described as a system’s capacity to function with some degree of independence from human control. This does not necessarily imply full automation but may include systems capable of operating based on indirect human input or supervision. Systems designed to operate solely with full manual human involvement and intervention are excluded from this definition.

Adaptiveness

An AI system may, but is not required to, exhibit adaptiveness, meaning it can modify its behavior after deployment based on new data or experience. Importantly, adaptiveness is optional, and systems without learning capabilities can still qualify as AI systems if the other criteria are met. Where present, however, this characteristic helps differentiate dynamic, self-learning AI systems from static software.
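
As a purely illustrative aid (not drawn from the guidelines), the following Python sketch shows what post-deployment adaptiveness can look like in miniature: a hypothetical classifier whose decision threshold shifts as labelled feedback arrives after deployment. The class name, threshold, and learning rate are all invented for this example.

```python
# Hypothetical illustration only: a classifier whose decision threshold
# adapts after deployment as labelled feedback arrives.

class AdaptiveClassifier:
    """Flags a transaction as 'high value' when it exceeds a threshold
    that is nudged by post-deployment feedback (a stand-in for learning)."""

    def __init__(self, threshold: float) -> None:
        self.threshold = threshold

    def predict(self, amount: float) -> bool:
        """Use phase: generate a decision for a new input."""
        return amount >= self.threshold

    def update(self, amount: float, correct_label: bool, rate: float = 0.1) -> None:
        """Adaptiveness: shift the threshold when feedback shows a mistake."""
        if self.predict(amount) != correct_label:
            direction = -1.0 if correct_label else 1.0
            self.threshold += rate * direction * abs(amount - self.threshold)


if __name__ == "__main__":
    clf = AdaptiveClassifier(threshold=1_000.0)
    print(clf.predict(900.0))               # False with the initial threshold
    clf.update(900.0, correct_label=True)   # feedback: this should have been flagged
    print(clf.threshold)                    # threshold has moved closer to 900.0
```

A static program, by contrast, would keep the same threshold no matter what feedback it received; under the guidelines, a system can still qualify as AI without this behavior, provided the other elements are present.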

System Objectives

AI systems are designed to achieve specific objectives, which can be either explicit (clearly programmed) or implicit (derived from training data or system behavior). These internal objectives are distinct from the system’s intended purpose, which is externally defined by its provider and the context of use.

Inferencing Capabilities

It is the capacity to infer how to generate output from the input it receives that defines an AI system and distinguishes it from traditional rule-based or deterministic software. According to the guidelines, “inferencing” encompasses both the use phase, where outputs such as predictions, decisions, or recommendations are generated, and the building phase, where models or algorithms are derived from data using AI techniques.
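
To make the contrast concrete, here is a minimal, hypothetical Python sketch (not taken from the guidelines) comparing a fixed, human-authored rule with a component that derives its decision rule from example data in a “building phase” and then applies it in a “use phase”. All function names, figures, and thresholds are invented for illustration.

```python
# Hypothetical contrast: a hard-coded rule versus a rule inferred from data.

def rule_based_approval(income: float) -> bool:
    """Traditional deterministic software: the rule is fully specified in code."""
    return income >= 30_000  # fixed, developer-chosen threshold


def fit_threshold(examples: list[tuple[float, bool]]) -> float:
    """'Building phase': infer a decision threshold from labelled examples
    instead of hard-coding it (a minimal stand-in for model training)."""
    approved = [income for income, flag in examples if flag]
    rejected = [income for income, flag in examples if not flag]
    # Place the threshold midway between the two groups' average incomes.
    return (sum(approved) / len(approved) + sum(rejected) / len(rejected)) / 2


def learned_approval(income: float, threshold: float) -> bool:
    """'Use phase': generate an output (a decision) for new input
    using the rule inferred during the building phase."""
    return income >= threshold


if __name__ == "__main__":
    training_data = [(20_000.0, False), (25_000.0, False), (40_000.0, True), (52_000.0, True)]
    inferred = fit_threshold(training_data)
    print(rule_based_approval(35_000.0))          # decision from a hard-coded rule
    print(learned_approval(35_000.0, inferred))   # decision from an inferred rule
```

Whether such a toy component would itself meet the AI Act threshold is a separate question; the sketch only visualizes where “inference” enters relative to a hard-coded rule.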

Output That Can Influence Physical or Virtual Environments

The output of an AI system (predictions, content, recommendations, or decisions) must be capable of influencing physical or virtual environments. This captures the wide functionality of modern AI, from autonomous vehicles and language models to recommendation engines. Systems that only process or visualize data without influencing any outcome fall outside the definition.

Environmental Interaction

Finally, AI systems must be able to interact with their environment, either physical (e.g., robotic systems) or virtual (e.g., digital assistants). This element underscores the practical impact of AI systems and further distinguishes them from purely passive or isolated software.

Systems Excluded from the AI System Definition

Beyond explaining the definitional elements of an AI system, the guidelines also clarify what is not considered an AI system under the AI Act, even where a system exhibits rudimentary inferencing traits:

  • Systems for improving mathematical optimization – certain machine learning tools used purely to improve computational performance (e.g., to speed up simulations or optimize bandwidth allocation) fall outside the scope unless they involve intelligent decision-making.
  • Basic data processing tools – systems that execute pre-defined instructions or calculations (e.g., spreadsheets, dashboards, and databases) without learning, reasoning, or modelling are not considered AI systems.
  • Classical heuristic systems – rule-based problem-solving systems that do not evolve through data or experience, such as chess programs based solely on minimax algorithms, are also excluded.
  • Simple prediction engines – tools using basic statistical methods (e.g., average-based predictors) for benchmarking or forecasting, without complex pattern recognition or inference, do not meet the definition’s threshold (see the sketch after this list).
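
For concreteness, the following hypothetical Python sketch shows the kind of “simple prediction engine” mentioned in the last bullet: a forecaster that simply returns the historical average, with no learned model or pattern recognition, and which would therefore typically fall outside the AI system definition. The data values are invented.

```python
# Hypothetical sketch of a "simple prediction engine": basic statistical
# estimation with no learned model or pattern recognition.

from statistics import mean


def average_based_forecast(history: list[float]) -> float:
    """Predict the next value as the plain mean of past observations."""
    return mean(history)


if __name__ == "__main__":
    monthly_sales = [120.0, 135.0, 128.0, 142.0]
    print(f"Next-month forecast: {average_based_forecast(monthly_sales):.1f}")
```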

The European Commission concludes by highlighting the following aspects:

  • The definition of an AI system in the AI Act is broad and must be assessed on the basis of how each system works in practice.
  • There is not an exhaustive list of what is considered AI; each case depends on the system’s features.
  • Not all AI systems are subject to regulatory obligations and oversight under the AI Act.
  • Only those that present higher risks, such as systems covered by the rules on prohibited or high-risk AI, are subject to legal obligations.

These guidelines play an important role in supporting the effective implementation of the AI Act. By clarifying what is meant by an AI system, they provide greater legal certainty and help all relevant stakeholders such as regulators, providers, and users understand how the rules apply in practice. Their functional and flexible approach reflects the diversity of AI technologies and offers a practical basis for distinguishing AI systems from traditional software. As such, the guidelines contribute to a more consistent and reliable application of the regulation across the EU.
