Defining AI Systems: Insights from the European Commission’s Guidelines

Understanding the Scope of the “Artificial Intelligence (AI) System” Definition: Key Insights

With the entry into force of the AI Act (Regulation (EU) 2024/1689) in August 2024, a pioneering regulatory framework for AI was established. On February 2, 2025, the first provisions of the AI Act became applicable, including the AI system definition, AI literacy obligations, and a limited number of prohibited AI practices. In line with Article 96 of the AI Act, the European Commission released detailed guidelines on February 6, 2025, to clarify how the definition of an AI system should be applied.

These non-binding guidelines are of high practical relevance, as they seek to bring legal clarity to one of the most fundamental aspects of the act – what qualifies as an “AI system” under EU law. Their publication offers critical guidance for developers, providers, deployers, and regulatory authorities aiming to understand the scope of the AI Act and assess whether specific systems fall within it.

“AI System” Definition Elements

Article 3(1) of the AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate output, such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

The European Commission emphasizes that this definition is based on a lifecycle perspective, covering both the building phase (pre-deployment) and the usage phase (post-deployment). Importantly, not all definitional elements must always be present—some may only appear at one stage, making the definition adaptable to a wide range of technologies, in line with the AI Act’s future-proof approach.

Machine-based System

The guidelines reaffirm that every AI system must operate through machines, combining hardware components (e.g., processors, memory, and interfaces) with software components (e.g., code, algorithms, and models). This includes not only traditional digital systems but also advanced platforms such as quantum computing and biological computing, provided they have computational capacity.

Autonomy

Another essential requirement is autonomy, described as a system’s capacity to function with some degree of independence from human control. This does not necessarily imply full automation but may include systems capable of operating based on indirect human input or supervision. Systems designed to operate solely with full manual human involvement and intervention are excluded from this definition.

Adaptiveness

An AI system may, but is not required to, exhibit adaptiveness – meaning it can modify its behavior post-deployment based on new data or experiences. Importantly, adaptiveness is optional, and systems without learning capabilities can still qualify as AI if other criteria are met. However, this characteristic is crucial in differentiating dynamic AI systems from static software.
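
To make this distinction concrete, the toy sketch below (illustrative only and not taken from the guidelines; all names and values are hypothetical) contrasts a static rule whose behavior is fixed once deployed with a simple scorer that keeps adjusting its internal weights from feedback received after deployment – the kind of behavior the guidelines describe as adaptiveness.

```python
# Illustrative sketch only; names and thresholds are hypothetical, not taken from the AI Act.

def static_filter(message: str) -> bool:
    """Fixed rule: behavior never changes after deployment (no adaptiveness)."""
    return "win a prize" in message.lower()

class AdaptiveFilter:
    """Toy scorer that updates its word weights from feedback received after deployment."""

    def __init__(self) -> None:
        self.weights: dict[str, float] = {}

    def score(self, message: str) -> float:
        return sum(self.weights.get(word, 0.0) for word in message.lower().split())

    def update(self, message: str, is_spam: bool) -> None:
        # Post-deployment learning: nudge each word weight toward the observed label.
        delta = 0.1 if is_spam else -0.1
        for word in message.lower().split():
            self.weights[word] = self.weights.get(word, 0.0) + delta

f = AdaptiveFilter()
f.update("win a free prize now", is_spam=True)  # behavior changes as new data arrives
print(f.score("free prize inside") > 0)         # True: the learned association carries over
```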

Systems Objectives

AI systems are designed to achieve specific objectives, which can be either explicit (clearly programmed) or implicit (derived from training data or system behavior). These internal objectives are distinct from the system’s intended purpose, which is defined externally by the provider and the context of use.

Inferencing Capabilities

The capacity to infer how to generate output from the input it receives is what defines an AI system and distinguishes it from traditional rule-based or deterministic software. According to the guidelines, “inferencing” encompasses both the use phase, where outputs such as predictions, decisions, or recommendations are generated, and the building phase, where models or algorithms are derived from data using AI techniques.
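
As a rough illustration of this conceptual distinction (a sketch only, not a legal test; all figures are invented), the first function below applies a fully specified rule written by a human, while the second derives its own parameters from example data during a building phase and then uses them to generate outputs for new inputs during a use phase.

```python
# Illustrative sketch; all values are made up and carry no legal significance.

def rule_based_fee(weight_kg: float) -> float:
    """Deterministic rule: every step is explicitly programmed by a human."""
    return 5.0 + 2.0 * weight_kg if weight_kg <= 10 else 25.0 + 1.5 * (weight_kg - 10)

def fit_linear_model(samples: list[tuple[float, float]]) -> tuple[float, float]:
    """Derive (infer) a slope and intercept from example data via least squares."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / sum(
        (x - mean_x) ** 2 for x, _ in samples
    )
    return slope, mean_y - slope * mean_x

# Building phase: the input-output mapping is derived from data rather than hand-coded.
slope, intercept = fit_linear_model([(1.0, 7.2), (4.0, 13.1), (8.0, 21.4), (12.0, 28.9)])

# Use phase: the learned parameters generate a prediction for an unseen input.
print(round(slope * 6.0 + intercept, 2))
```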

Output That Can Influence Physical or Virtual Environments

The output of an AI system (predictions, content, recommendations, or decisions) must be capable of influencing physical or virtual environments. This captures the wide functionality of modern AI, from autonomous vehicles and language models to recommendation engines. Systems that only process or visualize data without influencing any outcome fall outside the definition.

Environmental Interaction

Finally, AI systems must be able to interact with their environment, either physical (e.g., robotic systems) or virtual (e.g., digital assistants). This element underscores the practical impact of AI systems and further distinguishes them from purely passive or isolated software.

Systems Excluded from the AI System Definition

Beyond explaining the elements of the AI system definition in detail, the guidelines also clarify what is not considered AI under the AI Act, even where a system shows rudimentary inferencing traits:

  • Systems for improving mathematical optimization – Certain machine learning tools used purely to improve computational performance (e.g., to speed up simulations or optimize bandwidth allocation) fall outside the scope unless they involve intelligent decision-making.
  • Basic data processing tools – Systems that execute pre-defined instructions or calculations (e.g., spreadsheets, dashboards, and databases) without learning, reasoning, or modelling are not considered AI systems.
  • Classical heuristic systems – Rule-based problem-solving systems that do not evolve through data or experience, such as chess programs based solely on minimax algorithms, are also excluded.
  • Simple prediction engines – Tools using basic statistical methods (e.g., average-based predictors) for benchmarking or forecasting, without complex pattern recognition or inference, do not meet the definition’s threshold (see the illustrative sketch after this list).
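
For the last category in particular, the contrast can be sketched roughly as follows (an illustrative toy example only; whether any real system falls inside or outside the definition remains a case-by-case assessment): a plain average-based benchmark applies one fixed statistical summary to past values, while even a minimal pattern-based estimator conditions its output on observed input-output examples.

```python
# Illustrative toy contrast; this code does not decide how any real system is classified.

def average_forecast(history: list[float]) -> float:
    """Basic statistics: a single fixed summary of past values, no pattern recognition."""
    return sum(history) / len(history)

def nearest_neighbour_forecast(examples: list[tuple[float, float]], x_new: float) -> float:
    """Minimal pattern-based estimate: the output depends on observed input-output pairs."""
    closest = min(examples, key=lambda pair: abs(pair[0] - x_new))
    return closest[1]

sales_history = [100.0, 120.0, 95.0, 130.0]
print(average_forecast(sales_history))                         # 111.25

temperature_to_sales = [(15.0, 90.0), (25.0, 140.0), (30.0, 180.0)]
print(nearest_neighbour_forecast(temperature_to_sales, 27.0))  # 140.0
```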

The European Commission concludes by highlighting the following aspects:

  • The definition of an AI system in the AI Act is broad and must be assessed based on how each system works in practice.
  • There is not an exhaustive list of what is considered AI; each case depends on the system’s features.
  • Not all AI systems are subject to regulatory obligations and oversight under the AI Act.
  • Only those that present higher risks, such as systems covered by the rules on prohibited or high-risk AI, will be subject to legal obligations.

These guidelines play an important role in supporting the effective implementation of the AI Act. By clarifying what is meant by an AI system, they provide greater legal certainty and help all relevant stakeholders such as regulators, providers, and users understand how the rules apply in practice. Their functional and flexible approach reflects the diversity of AI technologies and offers a practical basis for distinguishing AI systems from traditional software. As such, the guidelines contribute to a more consistent and reliable application of the regulation across the EU.
