Understanding AI Types for Effective Governance

Overview of AI Types: Understanding to Better Govern

Artificial Intelligence (AI) today encompasses a wide diversity of technologies, models, and use cases. This plurality makes a clear understanding of these technologies essential for organizations to grasp their impacts, identify the associated risks, and define appropriate frameworks for responsibility and governance.

The AI System: The Foundation of the AI Ecosystem

Before examining the different categories of AI in detail, it is necessary to focus on the central concept around which the entire European framework is built: the AI system.

This notion constitutes the anchor point of the regulatory framework, as it defines the scope of application of the requirements, responsibilities, and control mechanisms provided for by the regulation.

Legal definition of an AI system under the AI Act: According to Article 3(1) of the AI Act, an AI system means:

‘A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.’

This definition highlights several defining elements:

  • Machine-based operation
  • Varying degrees of autonomy
  • Possible adaptiveness after deployment
  • The capability to infer outputs from the input received
  • The potential impact of the outputs on physical or virtual environments

In practice, the AI system is the primary object of regulation: risk classification, compliance obligations, controls, and sanctions apply to it.

An AI system may rely on one or more models, be open-source or proprietary, and be general-purpose or specialized, without these elements affecting its qualification as an AI system.

Example of an AI system: An AI-based candidate pre-screening system automatically analyzes CVs using an AI model in order to produce scores or recommendations.
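
To make this concrete, the sketch below shows how such a system might wrap a model. The function names, the 0.7 threshold, and the scikit-learn-style model and vectorizer interfaces are assumptions for illustration, not a reference implementation: the point is that the system as a whole takes input (a CV), runs inference, and returns an output that can influence a real-world decision.

```python
# Minimal sketch of a candidate pre-screening AI system (hypothetical names).
# The "system" is the whole pipeline: it receives CV text as input, calls a
# trained model to infer a suitability score, and returns a recommendation
# that can influence a real-world hiring decision.

from dataclasses import dataclass


@dataclass
class ScreeningResult:
    score: float          # model output, assumed to be a probability in [0, 1]
    recommendation: str   # human-readable recommendation derived from the score


def prescreen_cv(cv_text: str, model, vectorizer) -> ScreeningResult:
    """Transform raw CV text into a score and a recommendation."""
    features = vectorizer.transform([cv_text])          # input data -> features
    score = float(model.predict_proba(features)[0, 1])  # inference step (the AI model)
    label = "shortlist for interview" if score >= 0.7 else "route to manual review"
    return ScreeningResult(score=score, recommendation=label)
```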

AI Systems Outside the Scope of the AI Act

It should be noted that certain systems, although they display some technical similarities with AI systems, may be considered outside the scope of the AI Act, typically because they lack a genuine capability to infer their outputs from data.

These situations concern categories such as:

  • Traditional mathematical optimization: Systems aimed solely at improving classical optimization methods.
  • Data processing through fixed instructions: Tools relying on deterministic and predefined instructions.
  • Descriptive analysis, testing, and visualization: Systems limited to data description without producing recommendations.
  • Classic heuristic systems: Programs based on fixed rules without learning capability.
  • Simple statistical rules: Systems using basic estimates without handling complex patterns.
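
For illustration, here is a hedged sketch of what such out-of-scope tools can look like in code. The functions and thresholds are invented for this example; what matters is that each one follows fixed, human-written instructions and infers nothing from data, which is why comparable tools would typically fall outside the AI Act's notion of an AI system.

```python
import statistics

# Fixed-instruction data processing: every step is predetermined by the developer.
def net_salary(gross: float, tax_rate: float = 0.30) -> float:
    return gross * (1 - tax_rate)

# Classic heuristic: a hand-written rule with no learning capability.
def route_ticket(subject: str) -> str:
    return "billing" if "invoice" in subject.lower() else "support"

# Descriptive statistics: describes data without producing predictions or recommendations.
def describe(values: list[float]) -> dict:
    return {"mean": statistics.mean(values), "stdev": statistics.pstdev(values)}

print(net_salary(3000.0), route_ticket("Invoice 1042 overdue"), describe([1.0, 2.0, 3.0]))
```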

Moreover, understanding an AI system implies not confusing it with its technical components, foremost among which is the AI model.

The AI Model: The Technical Foundation of the System

Definition of an AI model: An AI model refers to a mathematical or computational representation obtained through a training process based on data and used to perform inference.

It enables the transformation of input data into outputs such as predictions, classifications, recommendations, or decisions, according to a learned function.

Example of an AI model: A fraud detection model specifically trained to identify suspicious banking transactions based on historical data.
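
As a hedged illustration of what "trained on data and used to perform inference" means in practice, here is a minimal sketch using synthetic transactions and invented features (amount, hour of day, foreign-country flag). It is not a realistic fraud model; it only shows training producing a learned function, then inference applying it to a new input.

```python
from sklearn.ensemble import RandomForestClassifier

# Synthetic historical transactions: [amount_eur, hour_of_day, is_foreign_country].
X_train = [
    [25.0, 14, 0],
    [9800.0, 3, 1],
    [60.0, 10, 0],
    [15000.0, 2, 1],
]
y_train = [0, 1, 0, 1]  # 1 = labelled as fraudulent in the historical data

# Training produces the mathematical representation: the AI model.
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

# Inference: the learned function maps a new transaction to an output.
new_transaction = [[7200.0, 4, 1]]
print(model.predict(new_transaction))        # e.g. [1] -> flagged as suspicious
print(model.predict_proba(new_transaction))  # associated probability estimates
```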

Definition of a general-purpose AI model: AI models as such are not directly regulated by the AI Act, but they become subject to specific obligations when they qualify as general-purpose AI models.

These models are capable of performing a wide range of distinct tasks and can be integrated into various downstream systems.

Example of a general-purpose AI model: A versatile language model, such as OpenAI’s GPT-4, capable of generating, summarizing, translating, or analyzing text.

General-Purpose AI Models Presenting Systemic Risks

Some general-purpose AI models present specific risks known as systemic risks. Under the AI Act, this classification targets models with high-impact capabilities; such capabilities are presumed, in particular, when the cumulative compute used for training exceeds 10^25 floating-point operations (Article 51).

Example of a model presenting systemic risks: A large general-purpose AI model trained on massive volumes of data integrated into numerous online services may present systemic risks within the meaning of the AI Act.

AI Model and AI System: Integration and Responsibilities

Unlike the AI system, the AI model does not directly interact with the end user. It produces effects only once integrated into a software environment.

This integration transforms a model into an operational AI system capable of influencing decisions or real-world environments.
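
A minimal sketch of that integration step, assuming a Flask web service and a toy classifier (the endpoint, feature names, and model are all invented for illustration): on its own, the model only computes a function, but once exposed through an interface in an operational context it behaves as an AI system whose outputs reach users and decisions.

```python
from flask import Flask, request, jsonify
from sklearn.linear_model import LogisticRegression

# Hypothetical model: in practice this would be the trained fraud classifier above.
model = LogisticRegression().fit(
    [[25.0, 0], [9800.0, 1], [60.0, 0], [15000.0, 1]],  # [amount_eur, is_foreign_country]
    [0, 1, 0, 1],
)

app = Flask(__name__)

@app.route("/score-transaction", methods=["POST"])
def score_transaction():
    # The system receives input from a user or another application, runs
    # inference, and returns an output that can influence a real decision.
    features = [[request.json["amount"], request.json["is_foreign"]]]
    return jsonify({"suspicious": bool(model.predict(features)[0])})

if __name__ == "__main__":
    app.run()  # model + interface + operational context = an AI system
```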

The Issue of Open Source in AI

Open-source AI models: According to the AI Act, open-source AI models are published under a free and open license allowing access, use, modification, and distribution.

These models play an important role in innovation; the AI Act grants them certain exemptions, but specific obligations remain depending on their use and risk level.

Example of an open-source model: Mistral 7B, released by Mistral AI under the Apache 2.0 license, is an example of an open-source language model.

Chatbots: The Conversational Interface

A chatbot is an AI system designed to simulate a conversation and provide information, assistance, or a service.

Chatbots are subject to the AI Act's transparency obligations, in particular the requirement to inform users that they are interacting with an AI system, and to additional obligations when deployed in sensitive contexts.

AI Agents: From Tool to Autonomy

AI agents are AI systems that automate complex tasks, make decisions, and execute actions with little or no human intervention.

Example of an AI agent: An automated assistant tasked with sorting incoming emails.
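
A highly simplified sketch of such an agent follows, with placeholder functions standing in for a real mail API and a real decision model: the point is the perceive / decide / act loop that runs without waiting for human confirmation.

```python
def fetch_unread_emails():
    # Placeholder: in practice an IMAP client or mail API call would go here.
    return [{"id": 1, "subject": "Invoice 1042 overdue"},
            {"id": 2, "subject": "Team lunch on Friday"}]

def classify(email) -> str:
    # Placeholder decision step: could be a trained classifier or an LLM call.
    return "finance" if "invoice" in email["subject"].lower() else "general"

def move_to_folder(email, folder: str) -> None:
    # Placeholder action: the agent changes its environment autonomously.
    print(f"Moving email {email['id']} to '{folder}'")

def run_agent():
    for email in fetch_unread_emails():   # perceive
        folder = classify(email)          # decide
        move_to_folder(email, folder)     # act

run_agent()
```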

Agentic AI: Orchestration and Complexity

Agentic AI represents a paradigm shift characterized by the orchestration of multiple collaborating agents and by structured autonomy in pursuit of shared objectives.
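
By way of illustration only, the sketch below shows the orchestration idea with two invented agents coordinated toward a single goal; real agentic systems add planning, memory, and tool use on top of this skeleton.

```python
class ResearchAgent:
    def run(self, task: str) -> str:
        return f"sources collected for: {task}"       # placeholder for retrieval or LLM calls

class WriterAgent:
    def run(self, material: str) -> str:
        return f"draft report based on [{material}]"  # placeholder for text generation

class Orchestrator:
    """Decomposes a goal and delegates sub-tasks to specialised agents."""
    def __init__(self):
        self.researcher = ResearchAgent()
        self.writer = WriterAgent()

    def achieve(self, goal: str) -> str:
        material = self.researcher.run(goal)  # sub-task 1, delegated
        return self.writer.run(material)      # sub-task 2, built on the first agent's output

print(Orchestrator().achieve("summarise chatbot obligations under the AI Act"))
```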

Why This Inventory is Essential for AI Governance

Given this diversity of AI technologies, understanding the different types of AI deployed within an organization is indispensable for effective governance.

Identifying the types of AI used facilitates a more accurate risk assessment and helps in structuring coherent and sustainable AI governance.
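
As a hedged starting point, such an inventory can be as simple as one structured record per system; the field names and risk categories below are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk"
    TRANSPARENCY = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    underlying_model: str      # in-house model or general-purpose model
    is_open_source: bool
    risk_category: RiskCategory
    owner: str                 # accountable business owner

inventory = [
    AISystemRecord(
        name="Candidate pre-screening",
        purpose="Score CVs and recommend shortlisting",
        underlying_model="in-house gradient boosting classifier",
        is_open_source=False,
        risk_category=RiskCategory.HIGH_RISK,  # recruitment use cases appear in Annex III
        owner="HR department",
    ),
]
print(len(inventory), "AI system(s) registered")
```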

Naaia supports organizations in mapping, governing, and ensuring the compliance of all their AI systems.
