Draft Guidelines on AI System Definition Under the EU AI Act

The European Commission has recently published draft guidelines on the definition of AI systems. These guidelines aim to clarify how the definition provided in Article 3(1) of the AI Act should be applied in practice.

Understanding the AI System Definition

According to Article 3(1) of the AI Act, an AI system is defined as:
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

This definition highlights the breadth of covered technologies: machine-based systems operating with varying levels of autonomy, possibly adapting after deployment, and inferring outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
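
To give a rough intuition for the "inference" element of this definition, the short Python sketch below contrasts a system whose output follows rules written entirely by a human with one that derives its decision parameter from example data. The spam-filtering scenario, function names, and numbers are hypothetical illustrations rather than examples drawn from the guidelines, and whether any real system meets the definition depends on the full Article 3(1) criteria as interpreted in the guidelines.

    # Illustrative sketch only: a rough intuition for the "inference" element,
    # not a legal test under the AI Act. The spam-scoring scenario and all
    # function names are hypothetical, not taken from the Commission's guidelines.

    # A system whose behaviour is fully specified by rules a human wrote down:
    # the output follows directly from those fixed rules rather than being
    # inferred from data.
    def rule_based_flag(message: str) -> bool:
        banned_words = {"lottery", "prize"}  # rule fixed entirely by a human
        return any(word in message.lower() for word in banned_words)


    # A system that derives its decision parameter from example data ("training")
    # and then applies it to new inputs: closer to the definition's notion of
    # inferring, from the input it receives, how to generate outputs.
    def fit_length_threshold(labelled: list[tuple[int, bool]]) -> float:
        spam = [length for length, is_spam in labelled if is_spam]
        ham = [length for length, is_spam in labelled if not is_spam]
        # midpoint between the average spam and non-spam message lengths
        return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2


    def learned_flag(message: str, threshold: float) -> bool:
        # the threshold came from data, not from a hand-written rule
        return len(message) > threshold


    if __name__ == "__main__":
        examples = [(120, True), (150, True), (30, False), (45, False)]
        threshold = fit_length_threshold(examples)
        print(rule_based_flag("You won a prize!"))            # matches a hand-written rule
        print(learned_flag("Hi, lunch tomorrow?", threshold))  # judged against the learned threshold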

Purpose of the Guidelines

The primary goal of the guidelines is to assist organizations in determining whether their software systems qualify as AI systems. This clarity is essential for ensuring the effective application of the AI rules across the European Union.

Non-Binding Nature and Evolution

These guidelines are not legally binding. They are intended to evolve over time, adapting to practical experience and to emerging questions and use cases. This flexibility allows the regulatory framework to remain responsive as the AI landscape continues to develop.

Complementary Guidelines

In conjunction with the definition guidelines, the Commission has also released the Guidelines on Prohibited Artificial Intelligence (AI) Practices. These guidelines set out the specific practices deemed unacceptable under the AI Act, reinforcing the European Commission’s commitment to ethical AI development.

Risk Classification of AI Systems

The AI Act classifies AI systems into various risk categories, including:

  • Prohibited AI systems
  • High-risk AI systems
  • AI systems subject to transparency obligations

As of February 2, 2025, the first rules under the AI Act have come into effect, including the AI system definition, AI literacy requirements, and the prohibitions on AI practices that pose unacceptable risks within the EU.

Conclusion

The draft guidelines from the European Commission represent a significant step towards the responsible governance of AI technologies. By providing clarity on what constitutes an AI system and outlining permissible and prohibited practices, these guidelines aim to foster innovation while ensuring the protection of health, safety, and fundamental rights.

As the guidelines are still in draft form, the Commission has yet to announce when they will be finalized, leaving room for further refinement and adaptation in line with ongoing developments in the AI sector.
