Defining AI: Where Do We Draw the Line?

How Many Neurons Must a System Compute Before You Can Call It AI?

In the realm of artificial intelligence (AI), a significant question arises: when can we definitively classify a system as AI? This dilemma mirrors the philosophical inquiry posed by Bob Dylan: “How many roads must a man walk down, before you can call him a man?” Just as this question lacks a clear answer, so too does the inquiry into the nature of AI systems.

The EU’s AI Act (Regulation 2024/1689) grapples with this complex issue by attempting to differentiate traditional software from AI systems. However, the guidelines issued by the Commission seem to draw arbitrary distinctions that fail to provide clarity.

Where Do We Draw the Line?

The guidelines focus heavily on the differences between various information processing techniques. The challenge is that there is no inherent distinction between the basic computational operations underpinning these techniques, whether they are deemed AI or traditional software: both categories rely on the same core operations performed by computational hardware.

For instance, a neural network is generally expected to be classified as AI. However, a single neuron in such a network performs basic mathematical operations: multiplication, addition, and normalization, much like a simple linear regression model. Yet while the guidelines treat the latter as traditional software, a sufficiently large network of interconnected neurons is recognized as an AI system. This raises the question: why the discrepancy?
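To make the comparison concrete, here is a minimal Python sketch, with illustrative weights and inputs, showing that a fitted linear regression and a single sigmoid neuron perform essentially the same arithmetic:

```python
import numpy as np

def linear_regression_predict(x, weights, bias):
    """A fitted linear regression model: a weighted sum plus an intercept."""
    return np.dot(weights, x) + bias

def single_neuron(x, weights, bias):
    """One artificial neuron: the same weighted sum, passed through a
    sigmoid activation ("normalization" into the 0-1 range)."""
    z = np.dot(weights, x) + bias        # identical arithmetic
    return 1.0 / (1.0 + np.exp(-z))      # plus a squashing function

x = np.array([0.5, -1.2, 3.0])           # illustrative input features
w = np.array([0.8, 0.1, -0.4])           # illustrative learned weights
b = 0.2                                   # illustrative intercept

print(linear_regression_predict(x, w, b))  # "traditional software"
print(single_neuron(x, w, b))              # one building block of "AI"
```

The only difference is the final squashing function; stack enough of these units together and, under the guidelines, the classification flips.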

The guidelines provide no sensible criteria for determining when a system transitions from basic computation to AI-driven inference. Is the classification based on whether a model is a single-layer or multi-layer neural network? Or does it depend on the use of predefined rules versus optimization for performance?

The “Inference” Problem: AI vs. Not-AI

According to the guidelines, inference is the key characteristic that separates AI from traditional software. However, many non-AI systems also possess the ability to infer, as the sketch after this list illustrates:

  • Rule-based expert systems derive conclusions from encoded knowledge;
  • Bayesian models update probabilities dynamically;
  • Regression models predict outcomes based on training data.
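As a concrete, hedged illustration of the first two items, consider the following minimal Python sketch; the rules, priors, and likelihoods are invented for the example:

```python
# A minimal sketch of "non-AI" systems that nonetheless infer.

def expert_system_infer(facts):
    """Rule-based expert system: forward-chain encoded rules to derive
    conclusions that were never stated explicitly."""
    rules = [
        (lambda f: {"fever", "cough"} <= f, "suspect_flu"),
        (lambda f: "suspect_flu" in f, "recommend_rest"),
    ]
    derived = set(facts)
    changed = True
    while changed:                     # keep firing rules until a fixed point
        changed = False
        for condition, conclusion in rules:
            if condition(derived) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def bayes_update(prior, likelihood, evidence_prob):
    """Bayesian model: update a belief in light of new evidence."""
    return likelihood * prior / evidence_prob

print(expert_system_infer({"fever", "cough"}))
print(bayes_update(prior=0.01, likelihood=0.9, evidence_prob=0.05))  # -> 0.18
```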

Despite these similar capabilities, the guidelines exclude such systems from the AI definition while including deep learning models that perform nearly identical functions on a larger scale. The result is an arbitrary classification: a simple statistical model is not AI, yet a neural network performing similar computations is.

Adaptiveness vs. Pre-trained Models

Another criterion in the AI Act's definition is adaptiveness: a system's ability to learn or change its behavior after deployment. Yet adaptiveness does not cleanly separate modern AI techniques from traditional information processing methods either. For example (see the sketch after this list):

  • Many modern machine learning systems do not adapt post-deployment (e.g., a static deep learning model); and
  • Conversely, older systems can adapt dynamically to the data being processed (e.g., optimization algorithms that refine parameters over time).
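To illustrate the contrast, here is a minimal Python sketch, with invented weights and readings, pairing a frozen model with a classic exponential-smoothing detector that adapts on every input:

```python
import numpy as np

# Illustrative weights, frozen at training time: this "AI system"
# never changes after deployment.
FROZEN_WEIGHTS = np.array([0.8, 0.1, -0.4])

def static_model_predict(x):
    """A deployed ML model with fixed parameters: no post-deployment learning."""
    return float(np.dot(FROZEN_WEIGHTS, x))

class AdaptiveThreshold:
    """Decades-old exponential smoothing: 'traditional software' that
    nonetheless refines its internal parameter on every observation."""
    def __init__(self, alpha=0.1):
        self.alpha = alpha   # smoothing factor (illustrative)
        self.level = 0.0     # adaptive baseline
    def observe(self, value):
        # Update the running estimate, then flag values above it.
        self.level = (1 - self.alpha) * self.level + self.alpha * value
        return value > self.level

detector = AdaptiveThreshold()
for reading in [1.0, 1.2, 0.9, 5.0]:     # illustrative sensor readings
    print(detector.observe(reading), detector.level)
```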

If a static neural network is regarded as AI while a dynamically updating non-ML system is not, the guidelines fail to capture what truly constitutes adaptability.

A Focus on Form Over Function

The guidelines attempt to differentiate AI from traditional software based on techniques rather than functionality. They classify:

  • Machine learning, deep learning, and logic-based AI as AI;
  • Classical statistical methods, heuristics, and certain optimization techniques as non-AI.

However, in practice, these techniques often blend together. Why should an advanced decision tree classifier count as AI while a complex Bayesian network does not? Such distinctions create a regulatory ‘cliff edge’, imposing significant burdens on developers and users by arbitrarily excluding certain approaches despite their similar real-world impacts.
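A toy example makes the blending visible. The sketch below, with entirely invented probabilities and thresholds, expresses the same loan-screening decision first as a small decision tree and then as a hand-rolled naive Bayes classifier:

```python
# A toy loan-screening decision (all numbers invented) expressed two ways.

def decision_tree_approve(income, debt_ratio):
    """A one-split decision tree: 'AI' under the guidelines."""
    if income > 50_000:
        return debt_ratio < 0.4
    return debt_ratio < 0.2

def naive_bayes_approve(income, debt_ratio):
    """A hand-rolled Bayesian classifier making the same kind of
    data-driven prediction, yet arguably outside the definition."""
    p_approve, p_reject = 0.5, 0.5                  # illustrative priors
    p_approve *= 0.8 if income > 50_000 else 0.3    # P(income | approve)
    p_reject  *= 0.2 if income > 50_000 else 0.7    # P(income | reject)
    p_approve *= 0.7 if debt_ratio < 0.4 else 0.1   # P(debt | approve)
    p_reject  *= 0.3 if debt_ratio < 0.4 else 0.9   # P(debt | reject)
    return p_approve > p_reject

print(decision_tree_approve(60_000, 0.3))  # True
print(naive_bayes_approve(60_000, 0.3))    # True: same outcome, different label
```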

A More Practical Approach: AI as a Spectrum

A more effective regulatory framework might define AI based on its functional characteristics, particularly the levels of adaptability and autonomy a system exhibits. This approach is reminiscent of the UK Government’s 2022 proposal for defining regulated AI, which suggested assessing AI systems based on two key qualities:

  1. Adaptability – The extent to which a system can change its behavior over time, especially in unpredictable ways;
  2. Autonomy – The degree to which a system can operate without direct human oversight, particularly in situations where its decisions carry real-world consequences.

Under this model, the higher a system’s adaptability and autonomy, the greater the regulatory concern. For instance, a highly adaptable and autonomous system poses the greatest risk, potentially making decisions that evolve beyond human control. Conversely, a system with limited adaptability and low autonomy presents minimal risk, requiring little to no regulatory intervention.
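As a purely hypothetical sketch of how such a spectrum might be operationalized, the following assigns a coarse concern level from two illustrative 0-1 scores; the scoring rule and thresholds are invented here:

```python
# Hypothetical scoring sketch of the spectrum idea: regulatory concern
# rises with both adaptability and autonomy. Thresholds are invented.

def regulatory_concern(adaptability: float, autonomy: float) -> str:
    """Map two 0-1 scores to a coarse concern level."""
    score = adaptability * autonomy
    if score > 0.6:
        return "high concern"
    if score > 0.2:
        return "moderate concern"
    return "minimal concern"

print(regulatory_concern(adaptability=0.9, autonomy=0.9))  # e.g. self-updating autonomous agent
print(regulatory_concern(adaptability=0.1, autonomy=0.2))  # e.g. fixed decision-support tool
```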

Conclusion: The Answer is Still Blowin’ in the Wind

Before the Commission’s guidance, the AI Act’s definition of an AI system could have been summarized as encompassing any reasonably complex large-scale computing system. However, after the guidance, the situation appears less clear. The definition still seems to include any complex computing system, but now features a series of seemingly arbitrary exceptions based on specific techniques rather than fundamental capabilities.

As a result, the guidance raises more questions than it answers. What truly distinguishes AI from non-AI? Where is the dividing line between “basic data processing” and “inference”? At what point does an optimization algorithm become AI? Rather than resolving these ambiguities, the Commission’s attempt to define AI feels like an exercise in drawing boundaries where none naturally exist.
