Draft Guidelines on AI System Definition Under the EU AI Act

The European Commission has recently published draft guidelines on the definition of AI systems. The guidelines aim to clarify how the definition in Article 3(1) of the AI Act applies in practice.

Understanding the AI System Definition

Article 3(1) of the AI Act, as cited by the Commission, defines an AI system as:
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The definition turns on several cumulative elements: the system must be machine-based, operate with some level of autonomy, and infer from the input it receives how to generate outputs (predictions, content, recommendations, or decisions) that can influence physical or virtual environments. Adaptiveness after deployment is possible but not required, as the “may exhibit” wording makes clear.
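
To make these cumulative elements concrete, the following minimal Python sketch models them as a simple screening checklist. It is purely illustrative: the class and function names are invented for this article, and a real qualification assessment would turn on the guidelines' detailed criteria rather than a boolean checklist.

    from dataclasses import dataclass

    # Purely illustrative: the Article 3(1) definition broken into its
    # elements. All names are invented for this sketch; this is not
    # legal advice or an official Commission tool.

    @dataclass
    class SystemProfile:
        machine_based: bool                   # runs as software and/or hardware
        operates_with_autonomy: bool          # some independence from human control
        infers_outputs: bool                  # derives outputs from inputs, not fixed rules alone
        outputs_influence_environment: bool   # predictions, content, recommendations, decisions
        adaptive_after_deployment: bool = False  # optional: the Act says "may exhibit"

    def may_qualify_as_ai_system(profile: SystemProfile) -> bool:
        """Rough screening check mirroring the definition's mandatory elements.

        Adaptiveness is deliberately excluded from the mandatory checks,
        because the definition only says a system "may exhibit" it.
        """
        return (
            profile.machine_based
            and profile.operates_with_autonomy
            and profile.infers_outputs
            and profile.outputs_influence_environment
        )

    # Example: a classical rule-based lookup with no inference capability
    # would typically fall outside the definition.
    legacy_rules_engine = SystemProfile(
        machine_based=True,
        operates_with_autonomy=False,
        infers_outputs=False,
        outputs_influence_environment=True,
    )
    print(may_qualify_as_ai_system(legacy_rules_engine))  # False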

Purpose of the Guidelines

The primary goal of the guidelines is to assist organizations in determining whether their software systems qualify as AI systems. This clarity is essential for ensuring the effective application of the AI rules across the European Union.

Non-Binding Nature and Evolution

It is important to note that these guidelines are not binding. They are intended to evolve over time, adapting to practical experiences and emerging questions or use cases. This flexibility allows for a responsive regulatory framework as the landscape of AI continues to develop.

Complementary Guidelines

In conjunction with the definition guidelines, the Commission has also released the Guidelines on Prohibited Artificial Intelligence (AI) Practices. That document outlines the specific practices deemed unacceptable under the AI Act, reinforcing the European Commission's commitment to ethical AI development.

Risk Classification of AI Systems

The AI Act classifies AI systems into various risk categories, including the following (see the illustrative sketch after this list):

  • Prohibited AI systems
  • High-risk AI systems
  • AI systems subject to transparency obligations

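As a rough illustration of how these tiers differ in consequence, the short Python sketch below pairs each category named above with a one-line summary of the kind of obligation it attracts. The labels and summaries are paraphrases for readability, not statutory text.

    from enum import Enum

    # Purely illustrative: the three risk tiers named in this article,
    # each paired with a one-line paraphrase of its obligations. The
    # Act's actual classification rules are far more detailed.

    class RiskCategory(Enum):
        PROHIBITED = "prohibited"
        HIGH_RISK = "high-risk"
        TRANSPARENCY = "transparency"

    OBLIGATION_SUMMARY = {
        RiskCategory.PROHIBITED: "may not be placed on the EU market or put into use",
        RiskCategory.HIGH_RISK: "conformity assessment, risk management, documentation",
        RiskCategory.TRANSPARENCY: "users must be informed they are interacting with AI",
    }

    for category in RiskCategory:
        print(f"{category.value}: {OBLIGATION_SUMMARY[category]}")
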
As of February 2, 2025, the first rules under the AI Act have come into effect, including the AI system definition, AI literacy requirements, and the prohibitions on AI practices that pose unacceptable risks within the EU.

Conclusion

The draft guidelines from the European Commission represent a significant step towards the responsible governance of AI technologies. By providing clarity on what constitutes an AI system and outlining permissible and prohibited practices, these guidelines aim to foster innovation while ensuring the protection of health, safety, and fundamental rights.

As the guidelines are still in draft form, the Commission has yet to announce when they will be finalized, leaving room for further refinement and adaptation in line with ongoing developments in the AI sector.
