Defining AI Systems: Insights from the European Commission’s Guidelines

Understanding the Scope of the “Artificial Intelligence (AI) System” Definition: Key Insights

With the entry into force of the AI Act (Regulation (EU) 2024/1689) in August 2024, a pioneering regulatory framework for AI was established. On February 2, 2025, the first provisions of the AI Act became applicable, including the definition of an AI system, AI literacy obligations, and a limited number of prohibited AI practices. In line with Article 96 of the AI Act, the European Commission released detailed guidelines on February 6, 2025, clarifying how the definition of an AI system is to be applied.

These non-binding guidelines are of high practical relevance, as they seek to bring legal clarity to one of the most fundamental aspects of the Act: what qualifies as an “AI system” under EU law. Their publication offers critical guidance for developers, providers, deployers, and regulatory authorities seeking to understand the scope of the AI Act and assess whether specific systems fall within it.

“AI System” Definition Elements

Article 3(1) of the AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate output, such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The European Commission emphasizes that this definition is based on a lifecycle perspective, covering both the building phase (pre-deployment) and the usage phase (post-deployment). Importantly, not all definitional elements must always be present—some may only appear at one stage, making the definition adaptable to a wide range of technologies, in line with the AI Act’s future-proof approach.

Machine-based System

The guidelines reaffirm that all AI systems must operate through machines, comprising both hardware components (e.g., processors, memory, and interfaces) and software components (e.g., code, algorithms, and models). This includes not only traditional digital systems but also advanced platforms such as quantum computing and biological computing, provided they possess computational capacity.

Autonomy

Another essential requirement is autonomy, described as a system’s capacity to function with some degree of independence from human control. This does not necessarily imply full automation but may include systems capable of operating based on indirect human input or supervision. Systems designed to operate solely with full manual human involvement and intervention are excluded from this definition.

Adaptiveness

An AI system may, but is not required to, exhibit adaptiveness – meaning it can modify its behavior post-deployment based on new data or experiences. Importantly, adaptiveness is optional, and systems without learning capabilities can still qualify as AI if other criteria are met. However, this characteristic is crucial in differentiating dynamic AI systems from static software.
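The distinction between adaptive and static behavior can be made concrete with a small sketch. This is purely illustrative, not drawn from the guidelines: both predictor classes and their update rule are invented for the example. The static predictor's behavior is fixed at deployment, while the adaptive one modifies its internal state from post-deployment feedback.

```python
class StaticPredictor:
    """Behavior fixed at deployment: same input always yields the same output."""

    def __init__(self, weight: float):
        self.weight = weight

    def predict(self, x: float) -> float:
        return self.weight * x


class AdaptivePredictor:
    """Updates its internal state from new data observed after deployment."""

    def __init__(self, weight: float, learning_rate: float = 0.1):
        self.weight = weight
        self.lr = learning_rate

    def predict(self, x: float) -> float:
        return self.weight * x

    def update(self, x: float, observed: float) -> None:
        # Simple gradient-style correction based on the prediction error.
        error = observed - self.predict(x)
        self.weight += self.lr * error * x


static = StaticPredictor(weight=2.0)
adaptive = AdaptivePredictor(weight=2.0)

# A new observation arrives after deployment.
adaptive.update(x=1.0, observed=3.0)

print(static.predict(1.0))    # unchanged behavior: 2.0
print(adaptive.predict(1.0))  # behavior has shifted toward the observation
```

Under the guidelines' reading, either system could still qualify as an AI system if the other definitional elements are met; adaptiveness alone is neither necessary nor sufficient.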

System Objectives

AI systems are designed to achieve specific objectives, which can be either explicit (clearly programmed) or implicit (derived from training data or system behavior). These internal objectives are distinct from a system’s intended purpose, which is defined externally by its provider and the context of use.

Inferencing Capabilities

It is the capacity to infer how to generate output from input data that defines an AI system; this capacity distinguishes AI systems from traditional rule-based or deterministic software. According to the guidelines, “inferencing” encompasses both the use phase, where outputs such as predictions, decisions, or recommendations are generated, and the building phase, where models or algorithms are derived from data using AI techniques.
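The contrast between a human-authored rule and a rule inferred from data can be sketched as follows. The spam-filter scenario, function names, and training data are all invented for illustration; the "model" here is deliberately trivial, just a threshold derived from labelled examples, to show the building-phase/use-phase split the guidelines describe.

```python
def rule_based_spam_filter(message: str) -> bool:
    # Deterministic: the rule was written by a human in advance and
    # never changes, regardless of any data the system sees.
    return "free money" in message.lower()


def fit_threshold(examples: list[tuple[float, bool]]) -> float:
    """Infer a decision threshold from labelled (score, is_spam) examples.

    The 'model' is simply the midpoint between the highest non-spam
    score and the lowest spam score in the training data.
    """
    ham = [score for score, is_spam in examples if not is_spam]
    spam = [score for score, is_spam in examples if is_spam]
    return (max(ham) + min(spam)) / 2


# Building phase: the decision rule is derived from data.
training = [(0.1, False), (0.3, False), (0.8, True), (0.9, True)]
threshold = fit_threshold(training)


# Use phase: the inferred rule generates outputs for new inputs.
def learned_filter(score: float) -> bool:
    return score > threshold
```

Only the second approach involves inferring how to generate output from input; the first applies a fixed instruction, which the guidelines place outside the definition.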

Output That Can Influence Physical or Virtual Environments

The output of an AI system (predictions, content, recommendations, or decisions) must be capable of influencing physical or virtual environments. This captures the wide functionality of modern AI, from autonomous vehicles and language models to recommendation engines. Systems that only process or visualize data without influencing any outcome fall outside the definition.

Environmental Interaction

Finally, AI systems must be able to interact with their environment, either physical (e.g., robotic systems) or virtual (e.g., digital assistants). This element underscores the practical impact of AI systems and further distinguishes them from purely passive or isolated software.

Systems Excluded from the AI System Definition

Beyond explaining the elements of the AI system definition, the guidelines also clarify what is not considered AI under the AI Act, even where a system shows rudimentary inferencing traits:

  • Systems for improving mathematical optimization – tools, including certain machine learning methods, that are used purely to improve computational performance (e.g., to enhance simulation speeds or bandwidth allocation) fall outside the scope unless they involve intelligent decision-making.
  • Basic data processing tools – Systems that execute pre-defined instructions or calculations (e.g., spreadsheets, dashboards, and databases) without learning, reasoning, or modelling are not considered AI systems.
  • Classical heuristic systems – Rule-based problem-solving systems that do not evolve through data or experience, such as chess programs based solely on minimax algorithms, are also excluded.
  • Simple prediction engines – Tools using basic statistical methods (e.g., average-based predictors) for benchmarking or forecasting, without complex pattern recognition or inference, do not meet the definition’s threshold.
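The last exclusion can be illustrated with a minimal sketch of an average-based predictor of the kind the guidelines place below the definition's threshold. The sales figures are invented for the example.

```python
def naive_forecast(history: list[float]) -> float:
    """Predict the next value as the arithmetic mean of past values.

    A basic statistical estimator with no pattern recognition or
    inference: the forecasting rule is fixed and fully pre-defined.
    """
    return sum(history) / len(history)


sales = [100.0, 110.0, 90.0, 100.0]
print(naive_forecast(sales))  # 100.0
```

Although such a tool produces a "prediction", it applies a pre-defined calculation rather than inferring how to generate output, so it would typically fall outside the AI system definition.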

The European Commission concludes by highlighting the following aspects:

  • The definition of an AI system in the AI Act is broad and must be assessed based on how each system works in practice.
  • There is no exhaustive list of what is considered AI; each case depends on the system’s features.
  • Not all AI systems are subject to regulatory obligations and oversight under the AI Act.
  • Only those that present higher risks, such as those covered by the rules on prohibited or high-risk AI, will be subject to legal obligations.

These guidelines play an important role in supporting the effective implementation of the AI Act. By clarifying what is meant by an AI system, they provide greater legal certainty and help all relevant stakeholders, such as regulators, providers, and users, understand how the rules apply in practice. Their functional and flexible approach reflects the diversity of AI technologies and offers a practical basis for distinguishing AI systems from traditional software. As such, the guidelines contribute to a more consistent and reliable application of the regulation across the EU.
