Draft Guidelines Illuminate AI System Definition Under EU AI Act

The European Commission has published draft guidelines on the definition of AI systems, aiming to clarify how the definition in Article 3(1) of the AI Act applies in practice.

Understanding the AI System Definition

Article 3(1) of the AI Act defines an AI system as:
"a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."

This definition centers on three characteristics: autonomy, potential adaptiveness after deployment, and the capacity to infer outputs, such as predictions, content, recommendations, or decisions, that can influence physical or virtual environments.

Purpose of the Guidelines

The primary goal of the guidelines is to help organizations determine whether their software systems qualify as AI systems. This clarity is essential for the effective application of the AI Act's rules across the European Union.

Non-Binding Nature and Evolution

It is important to note that these guidelines are not binding. They are intended to evolve over time, adapting to practical experiences and emerging questions or use cases. This flexibility allows for a responsive regulatory framework as the landscape of AI continues to develop.

Complementary Guidelines

In conjunction with the definition guidelines, the Commission has also released the Guidelines on Prohibited Artificial Intelligence (AI) Practices. Those guidelines describe the specific practices deemed unacceptable under the AI Act, reinforcing the European Commission's commitment to ethical AI development.

Risk Classification of AI Systems

The AI Act classifies AI systems by risk level, including:

  • Prohibited AI systems posing unacceptable risk
  • High-risk AI systems
  • Systems subject to transparency obligations
  • Minimal-risk systems, which face no additional obligations

As of February 2, 2025, the first rules under the AI Act have applied, including the AI system definition, AI literacy requirements, and the prohibitions on AI practices that pose unacceptable risks within the EU.

Conclusion

The draft guidelines from the European Commission represent a significant step towards the responsible governance of AI technologies. By providing clarity on what constitutes an AI system and outlining permissible and prohibited practices, these guidelines aim to foster innovation while ensuring the protection of health, safety, and fundamental rights.

As the guidelines are still in draft form, the Commission has yet to announce when they will be finalized, leaving room for further refinement and adaptation in line with ongoing developments in the AI sector.
