Defining AI: Where Do We Draw the Line?

How Many Neurons Must a System Compute Before You Can Call It AI?

In the realm of artificial intelligence (AI), a significant question arises: when can we definitively classify a system as AI? This dilemma mirrors the philosophical inquiry posed by Bob Dylan: “How many roads must a man walk down, before you can call him a man?” Just as this question lacks a clear answer, so too does the inquiry into the nature of AI systems.

The EU’s AI Act (Regulation 2024/1689) grapples with this issue by attempting to differentiate AI systems from traditional software. However, the guidelines the Commission has issued on the definition of an AI system draw distinctions that appear arbitrary and provide little clarity.

Where Do We Draw the Line?

The guidelines focus heavily on the differences between various information-processing techniques. The challenge is that there is no inherent distinction between the basic computational operations underpinning those techniques: whether a system is deemed AI or traditional software, it relies on the same core operations performed by the same computational hardware.

For instance, a neural network is generally expected to be classified as AI. Yet a single neuron in such a network performs only basic mathematical operations: it multiplies inputs by weights, sums them, and passes the result through an activation function. Strip away the activation and what remains is a simple linear regression model. The guidelines treat the latter as traditional software, while a sufficiently large network of interconnected neurons is recognized as an AI system. Why the discrepancy?
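
To make the comparison concrete, here is a minimal sketch in plain NumPy (the weights and inputs are illustrative, not taken from any real model) showing that a single sigmoid neuron and a linear regression prediction share the same arithmetic core:

    import numpy as np

    x = np.array([0.5, 1.2, -0.3])   # input features
    w = np.array([0.8, -0.4, 0.1])   # weights / regression coefficients
    b = 0.2                          # bias / intercept

    # Linear regression: a weighted sum plus an intercept
    linear_prediction = np.dot(w, x) + b

    # A single neuron: the identical weighted sum, squashed by an activation
    neuron_output = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))  # sigmoid

    print(linear_prediction, neuron_output)

Up to the final squash, the two computations are indistinguishable; scale is the only thing a large network adds.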

The guidelines provide no sensible criteria for determining when a system transitions from basic computation to AI-driven inference. Is the classification based on whether a model is a single-layer or multi-layer neural network? Or does it depend on the use of predefined rules versus optimization for performance?

The “Inference” Problem: AI vs. Not-AI

According to the guidelines, inference is the key characteristic that separates AI from traditional software. However, many non-AI systems also possess the ability to infer:

  • Rule-based expert systems derive conclusions from encoded knowledge;
  • Bayesian models update probabilities dynamically;
  • Regression models predict outcomes based on training data.

Despite these similar capabilities, the guidelines exclude such systems from the AI definition while including deep learning models that perform nearly identical functions at a larger scale. The result is an arbitrary classification: a simple statistical model is not classified as AI, yet a neural network performing similar computations is.
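
For instance, the Bayesian case amounts to a few lines of arithmetic. The sketch below, using invented probabilities for a hypothetical fault alarm, infers a posterior conclusion from evidence with no machine learning involved:

    # Bayes' theorem: probability of a fault given that an alarm has fired
    prior_fault = 0.01            # P(fault) before any evidence
    p_alarm_given_fault = 0.95    # P(alarm | fault)
    p_alarm_given_ok = 0.05       # P(alarm | no fault)

    # P(alarm) via the law of total probability
    p_alarm = (p_alarm_given_fault * prior_fault
               + p_alarm_given_ok * (1 - prior_fault))

    # Posterior: P(fault | alarm)
    posterior_fault = p_alarm_given_fault * prior_fault / p_alarm
    print(f"P(fault | alarm) = {posterior_fault:.3f}")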

Adaptiveness vs. Pre-trained Models

Another criterion in the AI Act’s definition is adaptiveness, referring to a system’s ability to learn or change its behavior after deployment. However, adaptiveness does not cleanly separate modern AI techniques from traditional information-processing methods either. For example:

  • Many modern machine learning systems do not adapt post-deployment (e.g., a static deep learning model); and
  • Conversely, older systems can adapt dynamically to the data being processed (e.g., optimization algorithms that refine parameters over time).

If a static neural network is regarded as AI while a dynamically updating non-ML system is not, the guidelines fail to capture what adaptiveness actually amounts to.
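
A minimal sketch makes the contrast concrete: an exponentially weighted moving average, a technique that long predates modern ML, refines its estimate with every observation it processes after deployment, while a frozen deep learning model never changes. The data stream and smoothing factor below are invented for illustration:

    class AdaptiveEstimator:
        """Classic EWMA: no machine learning, yet it adapts to every new data point."""

        def __init__(self, alpha: float = 0.1):
            self.alpha = alpha      # smoothing factor
            self.estimate = None    # running estimate, updated in production

        def update(self, observation: float) -> float:
            if self.estimate is None:
                self.estimate = observation
            else:
                # The parameter is refined with each observation: post-deployment adaptation
                self.estimate = (self.alpha * observation
                                 + (1 - self.alpha) * self.estimate)
            return self.estimate

    monitor = AdaptiveEstimator()
    for reading in [10.0, 10.4, 9.8, 15.2, 10.1]:
        monitor.update(reading)
    print(monitor.estimate)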

A Focus on Form Over Function

The guidelines attempt to differentiate AI from traditional software based on techniques rather than functionality. They classify:

  • Machine learning, deep learning, and logic-based AI as AI;
  • Classical statistical methods, heuristics, and certain optimization techniques as non-AI.

However, in practice, these techniques often blend together. Why should an advanced decision tree classifier be classified as AI while a complex Bayesian network is not? Such distinctions create a regulatory ‘cliff edge’: significant burdens fall on developers and users of techniques labeled AI, while approaches with similar real-world impacts are arbitrarily excluded.
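
The contrast is easy to stage. In the scikit-learn sketch below (toy data, purely illustrative), a decision tree and a naive Bayes classifier, the simplest form of Bayesian network, are fitted to the same examples and infer the same label for an unseen input, yet on the technique-based reading they fall on opposite sides of the line:

    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    # Toy training data: two features, two classes
    X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
    y = [0, 0, 1, 1]

    tree = DecisionTreeClassifier().fit(X, y)   # "machine learning", hence AI
    bayes = GaussianNB().fit(X, y)              # "classical statistics", hence not?

    # Both models infer the label of an unseen input from fitted parameters
    print(tree.predict([[0.85, 0.15]]))   # [1]
    print(bayes.predict([[0.85, 0.15]]))  # [1]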

A More Practical Approach: AI as a Spectrum

A more effective regulatory framework might define AI based on its functional characteristics, particularly the levels of adaptability and autonomy a system exhibits. This approach is reminiscent of the UK Government’s 2022 proposal for defining regulated AI, which suggested assessing AI systems based on two key qualities:

  1. Adaptability – The extent to which a system can change its behavior over time, especially in unpredictable ways;
  2. Autonomy – The degree to which a system can operate without direct human oversight, particularly in situations where its decisions carry real-world consequences.

Under this model, the higher a system’s adaptability and autonomy, the greater the regulatory concern. For instance, a highly adaptable and autonomous system poses the greatest risk, potentially making decisions that evolve beyond human control. Conversely, a system with limited adaptability and low autonomy presents minimal risk, requiring little to no regulatory intervention.
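
To illustrate how such a spectrum might operate, here is a purely hypothetical scoring sketch; the scores, the combination rule, and the tier thresholds are all invented for this example and appear in neither the AI Act nor the UK proposal:

    def regulatory_tier(adaptability: float, autonomy: float) -> str:
        """Map two functional scores in [0, 1] to a hypothetical oversight tier."""
        risk = adaptability * autonomy   # peak risk requires both qualities to be high
        if risk > 0.6:
            return "strict oversight"
        if risk > 0.2:
            return "proportionate controls"
        return "minimal intervention"

    print(regulatory_tier(adaptability=0.9, autonomy=0.9))  # strict oversight
    print(regulatory_tier(adaptability=0.1, autonomy=0.3))  # minimal intervention

The point of a graduated function like this, whatever its exact form, is that regulatory attention scales with what a system can do rather than with how it was built.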

Conclusion: The Answer is Still Blowin’ in the Wind

Before the Commission’s guidance, the AI Act’s definition of an AI system could have been summarized as encompassing any reasonably complex large-scale computing system. However, after the guidance, the situation appears less clear. The definition still seems to include any complex computing system, but now features a series of seemingly arbitrary exceptions based on specific techniques rather than fundamental capabilities.

As a result, the guidance raises more questions than it answers. What truly distinguishes AI from non-AI? Where is the dividing line between “basic data processing” and “inference”? At what point does an optimization algorithm become AI? Rather than resolving these ambiguities, the Commission’s attempt to define AI feels like an exercise in drawing boundaries where none naturally exist.
