Decoding the EU AI Act: Defining Artificial Intelligence for a New Era


The European Union’s Artificial Intelligence Act (AI Act), which entered into force on August 1, 2024, marks a significant milestone in establishing a cohesive regulatory framework for artificial intelligence across the EU’s 27 member states. While the Act aspires to foster innovation and address ethical, safety, and legal challenges, its implementation has revealed notable complexities, particularly in the definitional scope of an “AI system” as articulated in Article 3(1). These ambiguities have prompted substantial debate and critical analysis.

Article 3(1): Definition and Critique

Article 3(1) defines an AI system as:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”

This definition, adapted from the OECD’s November 2023 revision of its AI definition, reflects the ambition to encompass the full diversity of AI applications. However, its broad phrasing has been criticized for insufficiently distinguishing AI from traditional IT systems. This lack of specificity complicates consistent application, risks sweeping conventional software into the Act’s scope, and invites divergent interpretations by enforcement authorities across member states.

The European Law Institute’s ‘Three-Factor Approach’

To address these challenges, the European Law Institute (ELI) has proposed a ‘Three-Factor Approach’ as a more precise framework for delineating AI systems. This model introduces three key evaluative dimensions:

  1. Data or Domain-Specific Knowledge in Development
    This criterion assesses whether the system’s development relied on extensive datasets or specialized domain knowledge, signaling the application of advanced AI methodologies.
  2. Creation of New Know-How During Operation
    This dimension evaluates the system’s capability to dynamically generate new insights or knowledge during its operational phase, indicative of adaptiveness and learning.
  3. Degree of Formal Indeterminacy of Outputs
    This factor considers the unpredictability and variability of the system’s outputs, particularly in contexts traditionally reliant on human discretion, such as diagnostics or creative processes.

An IT system would qualify as an AI system under this framework if it meets at least three positive indicators spanning two or more of these categories. This approach seeks to balance technical neutrality with practical relevance, facilitating a more functional differentiation between AI and non-AI systems.
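The qualification rule described above can be sketched as a simple decision function. The sketch below is illustrative only: the factor names and indicator examples are hypothetical placeholders, since the ELI proposal defines its own detailed indicators for each factor.

```python
def qualifies_as_ai_system(indicators: dict[str, list[bool]]) -> bool:
    """Sketch of the ELI 'Three-Factor Approach' threshold: a system
    qualifies if it meets at least three positive indicators spanning
    two or more of the three factors."""
    positives_per_factor = {factor: sum(values) for factor, values in indicators.items()}
    total_positives = sum(positives_per_factor.values())
    factors_with_a_positive = sum(1 for n in positives_per_factor.values() if n > 0)
    return total_positives >= 3 and factors_with_a_positive >= 2

# Hypothetical assessment of a diagnostics tool (example values, not ELI criteria):
assessment = {
    "data_or_domain_knowledge": [True, True],  # e.g. trained on large clinical datasets
    "new_know_how_in_operation": [False],      # e.g. no learning after deployment
    "formal_indeterminacy": [True],            # e.g. outputs vary case by case
}
print(qualifies_as_ai_system(assessment))  # True: three positives across two factors
```

Note that the two-category requirement matters: three positive indicators concentrated in a single factor would not suffice under this reading of the framework.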

Implications for Stakeholders

The definitional ambiguity within the AI Act has far-reaching implications for businesses, developers, and policymakers. The absence of a precise framework complicates compliance efforts, particularly for startups and SMEs with limited resources for regulatory navigation. Moreover, overly expansive definitions risk imposing undue burdens on conventional IT solutions, potentially stifling technological progress.

Conversely, the ELI’s nuanced proposal provides a pragmatic pathway for classification, aligning regulatory requirements with the complex realities of AI technologies. By fostering clarity and predictability, this approach can enhance stakeholder confidence, enabling firms to innovate within defined ethical and legal parameters.

Broader Challenges and Strategic Considerations

Beyond definitional issues, the AI Act raises broader challenges that underscore the intricacies of regulating a rapidly evolving technological landscape:

  • Cross-Jurisdictional Harmonization: Ensuring consistency in regulatory enforcement across diverse legal and cultural contexts within the EU remains a formidable task.
  • Technological Dynamism: The accelerated pace of AI innovation, including breakthroughs in generative AI and autonomous systems, necessitates continuous legislative updates to maintain relevance.
  • Balancing Competing Objectives: Striking an equilibrium between fostering innovation and mitigating risks—such as algorithmic bias, data security breaches, and misinformation—is critical. Overregulation could suppress investment, while underregulation may exacerbate societal harm.

Conclusion: Why It Matters

The AI Act signifies the EU’s commitment to cultivating an ethical, transparent, and accountable AI ecosystem. However, its efficacy depends on resolving definitional ambiguities and addressing systemic complexities. For business leaders and policymakers, the imperative is to engage proactively in regulatory dialogues and leverage frameworks such as the ELI’s ‘Three-Factor Approach’ to shape a more effective governance structure.

By advancing regulatory clarity, the EU can foster a thriving AI landscape where innovation coexists with robust ethical and legal safeguards. As global competition intensifies, the EU’s ability to navigate these challenges will not only influence its internal technological ecosystem but also establish benchmarks for global AI regulation. For businesses, aligning strategies with these regulatory developments will be crucial to maintaining compliance and competitiveness in an increasingly dynamic AI-driven world.
