Defining AI Systems: Insights from the European Commission Guidelines

European Commission Guidelines on the Definition of an “AI System”

In February 2025, the European Commission published two crucial sets of guidelines aimed at clarifying key aspects of the EU Artificial Intelligence Act (“AI Act”). These guidelines, namely the Guidelines on the definition of an AI system and the Guidelines on prohibited AI practices, are intended to provide essential guidance on obligations that began to apply on February 2, 2025. This includes definitions, obligations regarding AI literacy, and prohibitions on certain AI practices.

This article summarizes the key takeaways from the Commission’s guidelines on the definition of AI systems, identifying critical components and examples that elucidate these definitions.

Defining an “AI System” Under the AI Act

The AI Act (Article 3(1)) defines an “AI system” as:

  1. A machine-based system;
  2. Designed to operate with varying levels of autonomy;
  3. May exhibit adaptiveness after deployment;
  4. Operates for explicit or implicit objectives;
  5. Infers, from the input it receives, how to generate outputs;
  6. Outputs can include predictions, content, recommendations, or decisions;
  7. Can influence physical or virtual environments.

The Guidelines provide explanatory guidance on each of these seven elements, which are essential for understanding what constitutes an AI system.

Key Takeaways from the Guidelines

  • Machine-based: The term refers to systems developed with and run on machines, encompassing a wide variety of computational systems, including emerging quantum computing systems. Notably, biological or organic systems can also qualify as machine-based if they provide computational capacity.
  • Autonomy: The concept of varying autonomy refers to a system’s ability to operate with some degree of independence from human involvement. Systems designed to operate solely with full manual human control are excluded from the AI system definition. By contrast, a system that requires a manually provided input but then generates its output on its own, without direct human control over how that output is produced, can still qualify as an AI system.
  • Adaptiveness: This element pertains to a system’s self-learning capabilities, allowing its behavior to change while in use. While adaptiveness after deployment is not a necessary condition for classification as an AI system, it remains a significant characteristic.
  • Objectives: Objectives are the explicit or implicit goals of the tasks performed by the AI system. The Guidelines differentiate between a system’s internal objectives and its external intended purpose, which relates to the context of deployment. For instance, a corporate AI assistant’s intended purpose is to assist a department, achieved through the system’s internal objectives.
  • Inferencing and AI techniques: The ability to infer from received inputs how to generate outputs is a key condition of AI systems. This capability is enabled by AI techniques used during the system’s building phase, such as machine learning approaches (including supervised, unsupervised, and reinforcement learning) that allow a system to learn from data how to generate outputs.
  • Outputs: Outputs from an AI system can be categorized into four types: predictions (estimations about unknown values), content (newly generated material), recommendations (suggestions for actions or products), and decisions (conclusions made by the AI).
  • Interaction with the environment: An AI system is characterized by its active impact on its deployment environment, whether physical or virtual, rather than being passive.

Exclusions from the Definition

The Guidelines also specify exclusions from the AI system definition, highlighting that simpler traditional software systems or programming approaches, which rely solely on rules defined by humans to execute operations, do not qualify. Examples include systems for mathematical optimization and basic data processing, as they lack the capacity to analyze patterns and adjust outputs autonomously.
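The distinction the Guidelines draw can be illustrated informally in code. The sketch below is a hypothetical, purely illustrative contrast, not a legal test: the first function applies fixed, human-defined rules (the kind of traditional software the Guidelines exclude), while the second learns its input-output mapping from data, i.e. it infers how to generate outputs.

```python
# Illustrative contrast only -- a hypothetical example, not a legal test
# under the AI Act or the Commission Guidelines.

def rule_based_discount(order_total: float) -> float:
    """Traditional software: the output follows a fixed, human-defined rule."""
    return order_total * 0.10 if order_total > 100 else 0.0

def fit_linear_model(xs, ys):
    """Machine learning: parameters are derived from data (least squares),
    so the input-output mapping is learned rather than hand-coded."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return lambda x: slope * x + intercept

# The rule-based function always executes the same human-written logic:
assert rule_based_discount(200.0) == 20.0

# The fitted model produces predictions (one of the four output types)
# inferred from patterns in its training data, not from explicit rules:
predict = fit_linear_model([1, 2, 3, 4], [2, 4, 6, 8])
print(round(predict(5.0), 2))  # a prediction inferred from data
```

In the Guidelines’ terms, only the second function exhibits the inferencing capability that is central to the AI system definition; the first remains ordinary rule-based software.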

In conclusion, the European Commission’s guidelines provide a comprehensive framework for understanding AI systems within the context of the AI Act. As regulatory developments continue to unfold, these definitions will be vital in guiding compliance and shaping the future of AI technologies.
