Defining AI: Where Do We Draw the Line?

How Many Neurons Must a System Compute Before You Can Call It AI?

In the realm of artificial intelligence (AI), a significant question arises: when can we definitively classify a system as AI? The dilemma mirrors the question posed by Bob Dylan: “How many roads must a man walk down, before you can call him a man?” Just as Dylan’s question resists a definitive answer, so does the question of when a system becomes AI.

The EU’s AI Act (Regulation 2024/1689) grapples with this complex issue by attempting to differentiate traditional software from AI systems. However, the guidelines issued by the Commission seem to draw arbitrary distinctions that fail to provide clarity.

Where Do We Draw the Line?

The guidelines focus heavily on the differences between various information processing techniques. The challenge is that there is no inherent distinction between the basic computational operations underpinning these techniques: whether a technique is labelled AI or traditional software, it relies on the same core operations performed by the same computational hardware.

For instance, a neural network is generally expected to be classified as AI. Yet a single neuron in such a network performs only basic arithmetic: a weighted sum of its inputs (multiplication and addition) followed by a simple activation function, which is computationally little more than a linear regression model, as the sketch below illustrates. While the AI Act treats linear regression as traditional software, a sufficiently large network of interconnected neurons is recognized as an AI system. This raises the question: why the discrepancy?
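
Here is a minimal sketch in plain Python, with the weights, inputs, and bias invented purely for illustration, showing that a single neuron computes essentially the same arithmetic as a linear regression model; the only difference is the activation function applied at the end:

```python
def linear_regression(inputs, weights, bias):
    """Prediction: a weighted sum of the inputs plus a bias term."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias


def neuron(inputs, weights, bias):
    """A single neuron: the same weighted sum, then an activation (ReLU)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, z)


x = [1.0, 2.0]  # illustrative inputs
w = [0.5, 0.3]  # illustrative weights
b = 0.1         # illustrative bias

print(linear_regression(x, w, b))  # 1.2
print(neuron(x, w, b))             # 1.2: identical arithmetic here
```

Stack a few thousand of these neurons into layers and, under the guidelines, the same arithmetic apparently crosses the line into AI.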

The guidelines provide no sensible criteria for determining when a system transitions from basic computation to AI-driven inference. Is the classification based on whether a model is a single-layer or multi-layer neural network? Or does it depend on the use of predefined rules versus optimization for performance?

The “Inference” Problem: AI vs. Not-AI

According to the guidelines, inference is the key characteristic that separates AI from traditional software. However, many non-AI systems also possess the ability to infer:

  • Rule-based expert systems derive conclusions from encoded knowledge;
  • Bayesian models update probabilities dynamically;
  • Regression models predict outcomes based on training data.

Despite these similar functionalities, the guidelines exclude such systems from the AI definition while including deep learning models that perform nearly identical operations at a larger scale. The result is an arbitrary classification: a simple statistical model is not AI, yet a neural network performing similar computations is. The sketch below makes the point concrete.
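
Here is a minimal sketch, with the rule threshold and the probabilities invented purely for illustration, of two supposedly non-AI techniques doing exactly what the guidelines call inference: deriving conclusions and updating beliefs from inputs.

```python
def expert_system(temperature_c):
    """Rule-based: derives a conclusion from encoded knowledge."""
    if temperature_c > 38.0:  # an encoded rule, yet plainly an inference
        return "fever"
    return "no fever"


def bayes_update(prior, likelihood, marginal):
    """Bayesian: revises a probability in light of new evidence."""
    return prior * likelihood / marginal


print(expert_system(39.2))          # -> fever
print(bayes_update(0.1, 0.9, 0.2))  # -> 0.45: belief revised upward
```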

Adaptiveness vs. Pre-trained Models

Another criterion in the AI Act definition is adaptiveness, referring to a system’s ability to learn or change behavior after deployment. However, adaptiveness does not cleanly separate modern AI techniques from traditional information processing methods. For example:

  • Many modern machine learning systems do not adapt post-deployment (e.g., a static deep learning model); and
  • Conversely, older systems can adapt dynamically to the data being processed (e.g., optimization algorithms that refine parameters over time).

If a static neural network is regarded as AI while a dynamically updating non-ML system is not, the guidelines fail to capture what truly constitutes adaptability. The sketch below makes the contrast concrete.
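
This is a minimal sketch, with all parameter values invented for illustration: the ‘AI’ model is frozen at deployment, while the classical filter adapts its internal state to every observation it processes.

```python
class StaticModel:
    """A frozen, pre-trained model: weights never change after deployment."""
    def __init__(self, weight, bias):
        self.weight, self.bias = weight, bias

    def predict(self, x):
        return self.weight * x + self.bias


class AdaptiveFilter:
    """A classic exponential moving average: no 'learning' in the ML sense,
    yet its internal state adapts to every observation it processes."""
    def __init__(self, alpha=0.2):
        self.alpha, self.estimate = alpha, 0.0

    def update(self, observation):
        self.estimate += self.alpha * (observation - self.estimate)
        return self.estimate


model = StaticModel(weight=2.0, bias=1.0)
print(model.predict(3.0))  # always 7.0, no matter how much data it sees

smoother = AdaptiveFilter()
for obs in [10.0, 12.0, 11.0]:
    print(smoother.update(obs))  # estimate shifts with each new observation
```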

A Focus on Form Over Function

The guidelines attempt to differentiate AI from traditional software based on techniques rather than functionality. They classify:

  • Machine learning, deep learning, and logic-based AI as AI;
  • Classical statistical methods, heuristics, and certain optimization techniques as non-AI.

In practice, however, these techniques often blend together. Why should an advanced decision tree classifier be classified as AI while a complex Bayesian network is not? Such distinctions create a regulatory ‘cliff edge’, imposing significant burdens on developers and users by arbitrarily excluding approaches with similar real-world impacts. The sketch below shows how interchangeable the two families can be.
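
As a rough illustration, the following sketch fits a decision tree and a naive Bayes classifier, used here as a simple stand-in for the broader Bayesian family, on the same data; the toy dataset is invented for the example, while the scikit-learn classes are standard ones. Both models are trained and queried in exactly the same way:

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]  # toy features
y = [0, 0, 1, 1]                                       # toy labels

tree = DecisionTreeClassifier().fit(X, y)   # "AI" under the guidelines
bayes = GaussianNB().fit(X, y)              # arguably not, under the same text

print(tree.predict([[1.0, 0.5]]))   # both models infer a label for
print(bayes.predict([[1.0, 0.5]]))  # unseen data in exactly the same way
```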

A More Practical Approach: AI as a Spectrum

A more effective regulatory framework might define AI based on its functional characteristics, particularly the levels of adaptability and autonomy a system exhibits. This approach is reminiscent of the UK Government’s 2022 proposal for defining regulated AI, which suggested assessing AI systems based on two key qualities:

  1. Adaptability – The extent to which a system can change its behavior over time, especially in unpredictable ways;
  2. Autonomy – The degree to which a system can operate without direct human oversight, particularly in situations where its decisions carry real-world consequences.

Under this model, the higher a system’s adaptability and autonomy, the greater the regulatory concern. For instance, a highly adaptable and autonomous system poses the greatest risk, potentially making decisions that evolve beyond human control. Conversely, a system with limited adaptability and low autonomy presents minimal risk, requiring little to no regulatory intervention.
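
As a sketch of how such a spectrum might be operationalized, the function below maps two scores to a level of concern. The 0-to-1 scales, the multiplicative combination rule, the thresholds, and the tier names are all assumptions of this example, not anything drawn from the UK proposal or the AI Act:

```python
def regulatory_tier(adaptability, autonomy):
    """Map two 0-1 scores to an illustrative level of regulatory concern."""
    risk = adaptability * autonomy  # simplest possible combination rule
    if risk >= 0.6:
        return "high concern"
    if risk >= 0.2:
        return "moderate concern"
    return "minimal concern"


print(regulatory_tier(adaptability=0.9, autonomy=0.9))  # high concern
print(regulatory_tier(adaptability=0.6, autonomy=0.5))  # moderate concern
print(regulatory_tier(adaptability=0.8, autonomy=0.1))  # minimal concern
```

A multiplicative rule is only one choice; a regulator might instead take the maximum of the two scores so that high autonomy alone is enough to trigger scrutiny.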

Conclusion: The Answer is Still Blowin’ in the Wind

Before the Commission’s guidance, the AI Act’s definition of an AI system could have been summarized as encompassing any reasonably complex large-scale computing system. However, after the guidance, the situation appears less clear. The definition still seems to include any complex computing system, but now features a series of seemingly arbitrary exceptions based on specific techniques rather than fundamental capabilities.

As a result, the guidance raises more questions than it answers. What truly distinguishes AI from non-AI? Where is the dividing line between “basic data processing” and “inference”? At what point does an optimization algorithm become AI? Rather than resolving these ambiguities, the Commission’s attempt to define AI feels like an exercise in drawing boundaries where none naturally exist.
