Defining AI Systems: Insights from the European Commission’s Guidelines

Understanding the Scope of “Artificial Intelligence (AI) System” Definition: Key Insights

With the entry into force of the AI Act (Regulation 2024/1689) in August 2024, a pioneering legal framework for AI was established. On February 2, 2025, the first provisions of the AI Act became applicable, including the AI system definition, AI literacy obligations, and a limited number of prohibited AI practices. In line with Article 96 of the AI Act, detailed guidelines were released on February 6, 2025, to clarify the application of the definition of an AI system.

These non-binding guidelines are of high practical relevance, as they seek to bring legal clarity to one of the most fundamental aspects of the Act – what qualifies as an “AI system” under EU law. Their publication offers critical guidance for developers, providers, deployers, and regulatory authorities aiming to understand the scope of the AI Act and assess whether specific systems fall within it.

“AI System” Definition Elements

Article 3(1) of the AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate output, such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The European Commission emphasizes that this definition is based on a lifecycle perspective, covering both the building phase (pre-deployment) and the usage phase (post-deployment). Importantly, not all definitional elements must always be present—some may only appear at one stage, making the definition adaptable to a wide range of technologies, in line with the AI Act’s future-proof approach.

Machine-based System

The guidelines reaffirm that all AI systems must operate through machines – comprising both hardware components (e.g., processors, memory, and interfaces) and software components (e.g., code, algorithms, and models). This includes not only traditional digital systems but also advanced platforms such as quantum computing and biological computing, provided they possess computational capacity.

Autonomy

Another essential requirement is autonomy, described as a system’s capacity to function with some degree of independence from human control. This does not necessarily imply full automation but may include systems capable of operating based on indirect human input or supervision. Systems designed to operate solely with full manual human involvement and intervention are excluded from this definition.

Adaptiveness

An AI system may, but is not required to, exhibit adaptiveness – meaning it can modify its behavior post-deployment based on new data or experiences. Importantly, adaptiveness is optional, and systems without learning capabilities can still qualify as AI if other criteria are met. However, this characteristic is crucial in differentiating dynamic AI systems from static software.

Systems Objectives

AI systems are designed to achieve specific objectives, which can be either explicit (clearly programmed) or implicit (derived from training data or system behavior). These internal objectives are distinct from the intended purpose, which is defined externally by the provider and the context of use.

Inferencing Capabilities

It is the capacity to infer how to generate output from input data that defines an AI system and distinguishes it from traditional rule-based or deterministic software. According to the guidelines, “inferencing” encompasses both the use phase, where outputs such as predictions, decisions, or recommendations are generated, and the building phase, where models or algorithms are derived using AI techniques.

Output That Can Influence Physical or Virtual Environments

The output of an AI system (predictions, content, recommendations, or decisions) must be capable of influencing physical or virtual environments. This captures the wide functionality of modern AI, from autonomous vehicles and language models to recommendation engines. Systems that only process or visualize data without influencing any outcome fall outside the definition.

Environmental Interaction

Finally, AI systems must be able to interact with their environment, either physical (e.g., robotic systems) or virtual (e.g., digital assistants). This element underscores the practical impact of AI systems and further distinguishes them from purely passive or isolated software.

Systems Excluded from the AI System Definition

In addition to explaining the elements of the AI system definition in detail, the guidelines clarify what is not considered AI under the AI Act, even where a system shows rudimentary inferencing traits:

  • Systems for improving mathematical optimization – certain machine learning tools used purely to improve computational performance (e.g., to enhance simulation speeds or bandwidth allocation) fall outside the scope unless they involve intelligent decision-making.
  • Basic data processing tools – Systems that execute pre-defined instructions or calculations (e.g., spreadsheets, dashboards, and databases) without learning, reasoning, or modelling are not considered AI systems.
  • Classical heuristic systems – Rule-based problem-solving systems that do not evolve through data or experience, such as chess programs based solely on minimax algorithms, are also excluded.
  • Simple prediction engines – Tools using basic statistical methods (e.g., average-based predictors) for benchmarking or forecasting, without complex pattern recognition or inference, do not meet the definition’s threshold.

The European Commission concludes by highlighting the following aspects:

  • The definition of an AI system in the AI Act is broad and must be assessed based on how each system works in practice.
  • There is not an exhaustive list of what is considered AI; each case depends on the system’s features.
  • Not all AI systems are subject to regulatory obligations and oversight under the AI Act.
  • Only those that present higher risks, such as those covered by the rules on prohibited or high-risk AI, are subject to legal obligations.

These guidelines play an important role in supporting the effective implementation of the AI Act. By clarifying what is meant by an AI system, they provide greater legal certainty and help all relevant stakeholders, such as regulators, providers, and users, understand how the rules apply in practice. Their functional and flexible approach reflects the diversity of AI technologies and offers a practical basis for distinguishing AI systems from traditional software. As such, the guidelines contribute to a more consistent and reliable application of the regulation across the EU.
