EU Commission’s New Guidelines on AI Systems Defined

The European Commission has published guidelines clarifying the definition of AI systems under the AI Act. The guidelines analyze each component of the definition, give relevant examples, and identify systems that may fall outside its scope. Although non-binding, the guidance is a valuable resource for organizations assessing whether, and how, the AI Act applies to them.

Key Components of the AI Act’s Definition of AI Systems

Article 3(1) of the AI Act defines an AI system as follows:

“AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The EU Commission identifies seven critical components of this definition:

1. AI Systems Are Machine-Based Systems

The guidelines clarify that the term “machine-based” refers to systems that integrate both hardware and software components: the system must be computationally driven and based on machine operations.

2. AI Systems Must Have Some Degree of Autonomy

According to Recital 12 of the AI Act, AI systems are designed to operate with some degree of independence from human involvement. Systems designed to operate solely under full manual human control, whether direct or indirect, therefore fall outside the definition. The capacity to infer outputs is key to achieving this autonomy.

3. AI Systems May Adapt After Deployment

The definition indicates that AI systems may show adaptiveness post-deployment. This refers to self-learning capabilities that allow a system’s behavior to evolve while in use. However, adaptiveness is not a mandatory requirement for a system to be classified as an AI system under the AI Act.
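As a minimal sketch of what “adaptiveness after deployment” can mean in practice (the class names and scenario below are hypothetical, not taken from the guidelines), compare a predictor whose behavior is frozen at deployment with one whose behavior evolves as it observes new data in use:

```python
class FrozenMeanPredictor:
    """Behavior fixed at deployment: always returns the training mean."""
    def __init__(self, training_data):
        self.mean = sum(training_data) / len(training_data)

    def predict(self):
        return self.mean


class AdaptiveMeanPredictor:
    """Self-learning while in use: each observed value shifts the running
    mean, so the system's behavior evolves after deployment."""
    def __init__(self, training_data):
        self.n = len(training_data)
        self.mean = sum(training_data) / self.n

    def observe(self, value):
        self.n += 1
        self.mean += (value - self.mean) / self.n  # incremental mean update

    def predict(self):
        return self.mean


frozen = FrozenMeanPredictor([2.0, 4.0])
adaptive = AdaptiveMeanPredictor([2.0, 4.0])
adaptive.observe(9.0)      # new data encountered in production
print(frozen.predict())    # 3.0, unchanged since deployment
print(adaptive.predict())  # 5.0, behavior has evolved in use
```

Only the second system would exhibit adaptiveness in the Act’s sense; under the definition, either could still qualify as an AI system, since adaptiveness is optional.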

4. AI Systems Are Designed to Operate According to Objectives

The guidelines state that the objectives of an AI system can be explicit (clearly defined by the developer) or implicit (deduced from the system’s behavior). These objectives are internal to the system, distinguishing them from the intended purpose, which is externally oriented.

5. AI Systems Must Be Capable of Inferring Outputs

This component is pivotal in distinguishing AI systems from simpler software. The definition aims to differentiate AI from traditional programming approaches. The guidelines clarify how to assess a system’s capacity to infer outputs, providing examples of systems that do not qualify as AI.

Systems that cannot infer outputs, or whose capacity to analyze patterns is limited, are not classified as AI systems under the AI Act. The guidelines give examples of systems falling outside the definition:

  • Systems improving mathematical optimization, like programs optimizing bandwidth allocation.
  • Basic data processing tools like database management programs.
  • Classical heuristics, such as chess programs evaluating board positions without learning.
  • Simple prediction systems that use basic statistical methods for forecasting.
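The distinction the guidelines draw can be sketched in code (an illustrative example with hypothetical names, not from the guidelines): a classical heuristic applies a rule fixed by its developer, while an inferring system derives its own parameters from input data. Note that even the learned model below is simple enough that, under the guidelines, it might still fall outside the definition as basic statistical estimation; the sketch only illustrates the mechanism of inference.

```python
def heuristic_shipping_fee(weight_kg: float) -> float:
    """Classical heuristic: a hand-coded rule, fixed by the developer.
    The system applies the rule; it does not infer it from data."""
    if weight_kg <= 1.0:
        return 5.0
    return 5.0 + 2.0 * (weight_kg - 1.0)


def fit_fee_model(weights, fees):
    """Minimal learned model: least-squares fit on one feature.
    The system infers its parameters (slope, intercept) from the input
    it receives, rather than having them fixed in advance."""
    n = len(weights)
    mean_w = sum(weights) / n
    mean_f = sum(fees) / n
    slope = (
        sum((w - mean_w) * (f - mean_f) for w, f in zip(weights, fees))
        / sum((w - mean_w) ** 2 for w in weights)
    )
    intercept = mean_f - slope * mean_w
    return lambda w: intercept + slope * w


# The learned rule generalizes to inputs never seen during fitting.
model = fit_fee_model([1.0, 2.0, 3.0], [5.0, 7.0, 9.0])
print(round(model(4.0), 2))  # 11.0
```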

6. AI Systems Generate Outputs Such as Predictions, Content, Recommendations, or Decisions

The EU Commission describes the four categories of AI system outputs: predictions, content, recommendations, and decisions. It emphasizes that these outputs can be more nuanced than those produced by traditional software.

7. AI System Outputs Must Be Able to Influence Physical or Virtual Environments

The final component highlights that AI systems are active participants in their environments: their outputs must be capable of influencing both physical objects and virtual settings.

Conclusion: Next Steps for Organizations

Organizations must assess whether and how the AI Act applies to their products and operations. This evaluation should align with the definition of AI systems as outlined in the guidelines, particularly regarding the inference capacity component. It is recommended that both legal and technical teams collaborate on this assessment to ensure compliance.
