EU Commission’s New Guidelines on the Definition of AI Systems

The European Commission has recently published guidelines clarifying the definition of AI systems under the AI Act. The guidelines analyze each component of the definition, provide relevant examples, and specify which systems may fall outside its scope. Although non-binding, this guidance is a valuable resource for organizations assessing whether and how the AI Act applies to them.

Key Components of the AI Act’s Definition of AI Systems

Article 3(1) of the AI Act defines an AI system as follows:

“AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The EU Commission identifies seven critical components of this definition:

1. AI Systems Are Machine-Based Systems

The guidelines clarify that the term “machine-based” covers systems whose functionality is enabled by both hardware and software components. Such systems must be computationally driven and operate through machine operations.

2. AI Systems Must Have Some Degree of Autonomy

According to Recital 12 of the AI Act, AI systems should have some degree of independence of action from human involvement. This excludes systems designed to operate solely with full manual human involvement and intervention, whether direct or indirect. The capacity to infer outputs is a key condition for this autonomy.

3. AI Systems May Adapt After Deployment

The definition indicates that AI systems may show adaptiveness post-deployment. This refers to self-learning capabilities that allow a system’s behavior to evolve while in use. However, adaptiveness is not a mandatory requirement for a system to be classified as an AI system under the AI Act.
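The distinction the guidelines draw here is between behavior fixed at design time and behavior that evolves in use. A minimal sketch of the latter, with purely illustrative names and numbers (not taken from the Act or the guidelines), might look like a spam filter whose decision threshold keeps shifting in response to user feedback after deployment:

```python
# Illustrative sketch of post-deployment adaptiveness: the system's
# behavior changes while in use, so the same input may be classified
# differently after it receives feedback. All names and values here
# are hypothetical, chosen only to make the concept concrete.

class AdaptiveFilter:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold

    def classify(self, spam_score: float) -> bool:
        """Flag a message as spam if its score meets the current threshold."""
        return spam_score >= self.threshold

    def feedback(self, spam_score: float, was_spam: bool) -> None:
        """Self-learning step: each user correction nudges the threshold."""
        if was_spam and not self.classify(spam_score):
            self.threshold -= 0.05  # missed spam: become stricter
        elif not was_spam and self.classify(spam_score):
            self.threshold += 0.05  # false alarm: become more lenient


f = AdaptiveFilter()
before = f.classify(0.48)        # below the initial 0.5 threshold
f.feedback(0.48, was_spam=True)  # a user flags the missed spam
after = f.classify(0.48)         # the deployed system has adapted
```

A system with a hard-coded, never-changing threshold could still qualify as an AI system under the Act, provided it meets the other components, since adaptiveness is optional.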

4. AI Systems Are Designed to Operate According to Objectives

The guidelines state that the objectives of an AI system can be explicit (clearly defined by the developer) or implicit (deduced from the system’s behavior). These objectives are internal to the system, distinguishing them from the intended purpose, which is externally oriented.

5. AI Systems Must Be Capable of Inferring Outputs

This component is pivotal in distinguishing AI systems from simpler software. The definition aims to differentiate AI from traditional software whose behavior is fully specified by rules defined solely by humans. The guidelines explain how to assess a system’s capacity to infer outputs and provide examples of systems that do not qualify as AI.

Systems that do not infer outputs or have limited pattern analysis capabilities are not classified as AI under the AI Act. Examples of non-AI systems include:

  • Systems improving mathematical optimization, like programs optimizing bandwidth allocation.
  • Basic data processing tools like database management programs.
  • Classical heuristics, such as chess programs evaluating board positions without learning.
  • Simple prediction systems that use basic statistical methods for forecasting.
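The dividing line in the examples above can be made concrete with a hedged sketch (the task, names, and figures are illustrative, not from the guidelines): the same job, allocating bandwidth per user, done once with a fixed human-authored formula and once with a rule derived from observed data. Only the second approach involves the kind of inference the definition targets.

```python
# Illustrative contrast (hypothetical example, not from the guidelines):
# a classical heuristic vs. a system that infers its rule from data.

def rule_based_throttle(active_users: int) -> float:
    """Classical heuristic: a fixed formula authored by a programmer.
    Nothing is inferred, so the guidelines would treat this as non-AI."""
    return 100.0 / max(active_users, 1)  # Mbps per user, hard-coded rule


def fit_throttle_model(samples: list[tuple[float, float]]) -> tuple[float, float]:
    """Ordinary least-squares fit: the input-output mapping is *inferred*
    from observed (users, Mbps) pairs rather than written by hand."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in samples)
    var = sum((x - mean_x) ** 2 for x, _ in samples)
    slope = cov / var
    return slope, mean_y - slope * mean_x  # y ≈ slope * x + intercept


# The fitted system generalizes to an input it never observed:
observed = [(10, 9.8), (20, 5.1), (40, 2.4), (80, 1.3)]
slope, intercept = fit_throttle_model(observed)
predicted = slope * 30 + intercept  # estimate for 30 concurrent users
```

Whether such a simple regression would in practice fall inside or outside the definition is a case-by-case legal assessment; the sketch only illustrates the conceptual difference between applying a pre-defined rule and deriving one from inputs.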

6. AI System’s Outputs Must Influence Environments

The EU Commission describes the outputs of AI systems, namely predictions, content, recommendations, and decisions, emphasizing that these outputs can be more nuanced than those produced by traditional software.

7. AI Systems Must Interact with the Environment

This component highlights that AI systems are not passive: their outputs can actively influence both physical objects and virtual environments such as digital spaces and data flows.

Conclusion: Next Steps for Organizations

Organizations must assess whether and how the AI Act applies to their products and operations. This evaluation should align with the definition of AI systems as outlined in the guidelines, particularly regarding the inference capacity component. It is recommended that both legal and technical teams collaborate on this assessment to ensure compliance.
