EU Commission’s New Guidelines on AI Systems Defined

The European Commission has recently published guidelines that provide a detailed clarification of the definition of AI systems under the AI Act. These guidelines analyze each component of the definition, provide relevant examples, and specify which systems may fall outside this definition. Although non-binding, this guidance serves as a valuable resource for organizations to determine their compliance with the AI Act.

Key Components of the AI Act’s Definition of AI Systems

Article 3(1) of the AI Act defines an AI system as follows:

“AI system means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The EU Commission identifies seven critical components of this definition:

1. AI Systems Are Machine-Based Systems

The guidelines clarify that the term “machine-based” refers to systems that integrate hardware and software components to enable their functioning: the system must be computationally driven and operate on the basis of machine operations.

2. AI Systems Must Have Some Degree of Autonomy

According to Recital 12 of the AI Act, AI systems should operate with some degree of independence from human involvement. This excludes systems that operate solely under full manual human control, whether direct or indirect. The ability to infer outputs is key to achieving this autonomy.

3. AI Systems May Adapt After Deployment

The definition indicates that AI systems may show adaptiveness post-deployment. This refers to self-learning capabilities that allow a system’s behavior to evolve while in use. However, adaptiveness is not a mandatory requirement for a system to be classified as an AI system under the AI Act.
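The distinction can be made concrete with a small sketch. The class below is a hypothetical illustration (not taken from the guidelines): a toy spam filter whose decision threshold shifts as it receives labelled feedback while in use, i.e. its behaviour evolves after deployment.

```python
class AdaptiveThresholdFilter:
    """Hypothetical spam filter illustrating adaptiveness after deployment."""

    def __init__(self):
        # Scores of messages confirmed as spam, collected while in use.
        self.spam_scores = []

    def predict(self, score: float) -> bool:
        # The decision threshold adapts to the feedback observed so far;
        # with no feedback yet, a fixed default of 0.5 is used.
        if self.spam_scores:
            threshold = sum(self.spam_scores) / len(self.spam_scores)
        else:
            threshold = 0.5
        return score > threshold

    def feedback(self, confirmed_spam_score: float) -> None:
        # Post-deployment learning: each confirmed example shifts
        # future classification behaviour.
        self.spam_scores.append(confirmed_spam_score)
```

A system whose threshold was fixed at design time would behave identically forever; this one changes with use, which is the kind of self-learning the recital refers to. Note that, per the guidelines, such adaptiveness is optional, not required, for a system to qualify as AI.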

4. AI Systems Are Designed to Operate According to Objectives

The guidelines state that the objectives of an AI system can be explicit (clearly defined by the developer) or implicit (deduced from the system’s behavior). These objectives are internal to the system, distinguishing them from the intended purpose, which is externally oriented.
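An explicit objective is one the developer directly encodes, for example as a loss function the system is built to minimize. The function below is a generic illustration of this idea, not an example drawn from the guidelines.

```python
def mean_squared_error(predictions, targets):
    """Explicit objective: the developer directly encodes the system's goal
    (minimise the average squared prediction error) in its training code."""
    errors = [(p - t) ** 2 for p, t in zip(predictions, targets)]
    return sum(errors) / len(errors)
```

An implicit objective, by contrast, is never written down in this way; it must be deduced from how the system behaves once deployed.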

5. AI Systems Must Be Capable of Inferring Outputs

This component is pivotal in distinguishing AI systems from simpler software. The definition aims to differentiate AI from traditional programming approaches. The guidelines clarify how to assess a system’s capacity to infer outputs, providing examples of systems that do not qualify as AI.

Systems that cannot infer outputs, or whose pattern-analysis capabilities are limited, are not classified as AI under the AI Act. Examples of such non-AI systems include:

  • Systems used to improve mathematical optimization, such as programs that optimize bandwidth allocation.
  • Basic data processing tools like database management programs.
  • Classical heuristics, such as chess programs evaluating board positions without learning.
  • Simple prediction systems that use basic statistical methods for forecasting.
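To make the last category concrete, here is a hedged sketch of a “simple prediction system”: a fixed, human-specified moving-average forecast. Because the rule is fully hand-coded and involves no learned inference, a system like this would likely fall outside the definition. The example is illustrative only, not taken from the guidelines.

```python
def moving_average_forecast(history, window=3):
    """Forecast the next value as the mean of the last `window` observations.

    The rule is entirely specified by a human in advance; nothing is
    inferred or learned from data, which is what the guidelines point to
    when excluding basic statistical prediction methods.
    """
    recent = history[-window:]
    return sum(recent) / len(recent)
```

Contrast this with a model whose forecasting rule is itself derived from training data; it is that capacity to infer how to generate outputs that the AI Act's definition targets.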

6. AI Systems’ Outputs Must Influence Environments

The EU Commission describes the outputs of AI systems, namely predictions, content, recommendations, and decisions, emphasizing that these outputs can be more nuanced than those produced by traditional software.

7. AI Systems Must Interact with the Environment

This component highlights that AI systems are active participants in their environments, capable of influencing both physical objects and virtual settings.

Conclusion: Next Steps for Organizations

Organizations must assess whether and how the AI Act applies to their products and operations. This evaluation should align with the definition of AI systems as outlined in the guidelines, particularly regarding the inference capacity component. It is recommended that both legal and technical teams collaborate on this assessment to ensure compliance.
