Understanding AI Diversity for Effective Governance

Overview of AI Types: A Foundation for Better Governance

Artificial Intelligence (AI) encompasses a wide range of technologies, models, and applications. This diversity makes it crucial for organizations to understand the different forms AI can take in order to grasp its impacts, identify the associated risks, and establish appropriate frameworks for accountability and management.

Before effectively deploying, regulating, or governing AI solutions, it is essential to clarify fundamental concepts. This article provides a concise overview of the main types of AI, offering clear and structured reference points.

The AI System: The Foundation of the AI Ecosystem

Before diving into the various categories of AI, it is important to focus on the central concept that underpins the entire European framework: the AI system.

This notion serves as the anchor for the regulatory framework, defining the scope of application for requirements, responsibilities, and control mechanisms.

Legal Definition of the AI System

According to Article 3 (1) of the AI Act, an AI system is defined as:

“A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

This definition highlights several structural elements:

  • Machine-Based Operation
  • Degree of Autonomy
  • Adaptiveness After Deployment
  • Inference Capability
  • Potential Impact of Outputs

In practice, the AI system is the primary subject of regulation: risk classification, compliance obligations, controls, and sanctions pertain to it.

An AI system may rely on one or more models, be open-source or proprietary, and can have a general or specialized character, without these elements altering its qualification as an AI system.

Example of an AI System: An AI resume screening system automatically analyzes CVs using an AI model to produce scores or recommendations.
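
To make the distinction concrete, here is a minimal sketch in Python of such a screening pipeline. The names (`model_predict`, `screening_system`) and the keyword-based scoring are purely illustrative stand-ins, not a real screening model:

```python
# Minimal sketch of the model/system distinction, using a hypothetical
# resume-screening pipeline (names and scoring logic are illustrative).

from dataclasses import dataclass

@dataclass
class CvScore:
    candidate_id: str
    score: float
    recommendation: str

def model_predict(cv_text: str) -> float:
    """Stand-in for a trained AI model: maps CV text to a relevance score."""
    # A real model would be a learned function; this placeholder
    # just illustrates the input -> inference -> output flow.
    keywords = {"python", "governance", "compliance"}
    hits = sum(word in cv_text.lower() for word in keywords)
    return hits / len(keywords)

def screening_system(candidate_id: str, cv_text: str) -> CvScore:
    """The AI *system*: model + business rules + an output that can
    influence a real decision (which CVs a recruiter sees first)."""
    score = model_predict(cv_text)
    recommendation = "review" if score >= 0.5 else "deprioritize"
    return CvScore(candidate_id, score, recommendation)

print(screening_system("c-001", "Python developer with AI governance experience"))
```

The point is structural: the model is the learned scoring function, while the system is everything wrapped around it that turns a score into a decision affecting candidates.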

AI Systems Exempt from the AI Act

It is important to note that certain systems, while meeting general technical criteria, may fall outside the scope of the AI Act if they do not meet the regulation's functional definition of AI or do not present the risks targeted by the European legislator. These situations include:

  • Traditional Mathematical Optimization: Systems aimed solely at improving or accelerating classical optimization methods, without learning or altering decision-making logic (e.g., accelerated physical simulations, parameter approximations).
  • Data Processing by Fixed Instructions: Tools relying exclusively on deterministic and predefined instructions, without modeling or reasoning (e.g., sorting, filtering, or extracting data via SQL).
  • Descriptive Analysis, Testing, and Visualization: Systems limited to data description, standard statistical tests, or visualizing indicators without producing recommendations or predictions.
  • Classic Heuristic Systems: Programs based on fixed rules or heuristics, without learning or self-improvement capabilities.
  • Simple Statistical Rules: Systems applying basic statistical estimations without modeling complex patterns.
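
For contrast with the resume-screening example above, the sketch below shows a fixed-instruction tool of the kind listed here: a deterministic filter whose logic is fully predetermined, with no learning and no inference (the data and threshold are invented for illustration):

```python
# Sketch of a fixed-instruction tool that would typically fall outside the
# AI Act's functional definition: deterministic filtering with no learning,
# no inference, and no adaptive behavior (threshold is hard-coded).

transactions = [
    {"id": 1, "amount": 120.0},
    {"id": 2, "amount": 9800.0},
    {"id": 3, "amount": 45.5},
]

# Equivalent to: SELECT id FROM transactions WHERE amount > 5000
flagged = [t["id"] for t in transactions if t["amount"] > 5000]
print(flagged)  # [2] -- the same input always yields the same output
```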

Understanding the AI system also requires distinguishing it from its technical components, primarily the AI model.

The AI Model: Technical Foundation of the System

Definition of the AI Model: An AI model represents a mathematical or computational construct derived from a learning process based on data, used to make inferences.

It transforms input data into outputs such as predictions, classifications, recommendations, or decisions, based on a learned function. Thus, it constitutes the algorithmic core of automated reasoning without having an operational purpose in itself.

Example of an AI Model: A fraud detection model specifically trained to identify suspicious banking transactions from historical data.
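
As an illustration of a model in this sense, the following sketch trains a small classifier on synthetic data with scikit-learn; the features, labels, and numbers are invented and stand in for real transaction histories:

```python
# Minimal sketch of an AI *model* in the sense above: a mathematical
# construct learned from data, producing inferences. Synthetic data and
# feature choices are purely illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Features: [amount, hour_of_day]; label: 1 = fraudulent
X = rng.normal(size=(500, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1).astype(int)

model = LogisticRegression().fit(X, y)  # the learned function
print(model.predict_proba([[2.0, 1.5]])[0, 1])  # inferred fraud probability
```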

General-Purpose AI Models: In general, AI models are not directly targeted by the AI Act, as they are considered fundamental components of AI systems.

They are only subject to specific regulation when they exhibit characteristics qualifying them as general-purpose AI models, which introduces a distinct classification within the AI Act.

Such models are defined as:

“An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.”

Example of a General-Purpose AI Model: A versatile language model, such as GPT-4 from OpenAI, capable of generating, summarizing, translating, or analyzing text.

General-Purpose Models with Systemic Risks

Some general-purpose AI models present specific risks, termed systemic risks. These refer to:

“A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.”

Example of a Model that May Present Systemic Risks: A large general-purpose AI model trained on massive volumes of textual, visual, and audio data, integrated into various online services, may present systemic risks as defined by the AI Act.

According to the European Commission, general-purpose AI models trained with a cumulative amount of compute exceeding 10²⁵ floating-point operations (FLOP) are presumed to present systemic risk.
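
As a rough illustration of how this presumption threshold works in practice, the following sketch compares a hypothetical cumulative training-compute figure against it (the figure is invented):

```python
# Back-of-the-envelope check against the 10^25 FLOP presumption threshold.
# The training-compute figure below is a made-up example, not a real model.

SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25

cumulative_training_flop = 3.2e24  # hypothetical cumulative training compute

presumed_systemic = cumulative_training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP
print(presumed_systemic)  # False: below the presumption threshold
```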

Independently of this threshold, the European Commission may designate a model as presenting systemic risk on the basis of criteria defined in the regulation.

These models are subject to enhanced transparency obligations and specific risk assessments.

Model and AI System: Integration and Responsibilities

Unlike the AI system, the AI model does not interact directly with the end user. It only produces concrete effects once integrated into a software environment, combined with data, interfaces, business rules, and organizational processes.

This integration transforms a model into an operational AI system capable of influencing real decisions or environments.

A single model can be reused within several distinct AI systems, each pursuing its own objectives and presenting different levels of risk, usage contexts, and responsibilities. Therefore, the nature of the AI system depends not only on the model used but also on how it is deployed and operated.
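
The sketch below illustrates this reuse: one stand-in model embedded in two hypothetical systems whose contexts, and therefore potential risk classifications, differ (all names and logic are illustrative):

```python
# Sketch: the same model reused in two distinct AI systems with different
# purposes and risk profiles (all names are illustrative).

def sentiment_model(text: str) -> float:
    """Stand-in for a trained sentiment model returning a score in [0, 1]."""
    return min(1.0, text.lower().count("great") * 0.5)

def product_review_dashboard(review: str) -> str:
    # Low-stakes system: aggregates customer sentiment for analytics.
    return "positive" if sentiment_model(review) > 0.5 else "negative"

def employee_monitoring_tool(message: str) -> str:
    # Same model, but a workplace-surveillance context could change the
    # system's risk classification and the deployer's obligations.
    return "flag" if sentiment_model(message) < 0.2 else "ok"

print(product_review_dashboard("Great product, great support"))
print(employee_monitoring_tool("Deadline missed again"))
```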

Focus: General-Purpose Models and Their Derived Systems

In contrast to AI systems based on specialized models, there exist AI systems that integrate general-purpose AI models: general-purpose AI systems.

These systems are defined as:

“An AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”

This capacity for generalization and reuse heightens governance, traceability, and accountability concerns, especially when these models are deployed at scale or integrated into sensitive contexts.

Governance Challenges

The distinction between model and system is crucial for AI governance. Risks, obligations, and responsibilities do not stem from the model in isolation but from its integration and usage within a deployed system, as well as the system’s objectives and context of use.

Understanding the role of the model as a technical component enables better mapping of AI systems, identifying technological dependencies, and structuring appropriate governance.

The Open Source Question in AI

Open Source Models: Under Articles 53(2) and 54(6) of the AI Act, an AI model is treated as open source when it is:

  • Released under a free and open-source license that allows consultation, use, modification, and distribution.
  • Published with its parameters, including weights, model architecture, and usage information, made publicly available.
  • Not subject to direct monetization, such as exclusive paid hosting.

The AI Act explicitly recognizes their role in innovation while introducing specific obligations based on usage and risk level.

Example of an Open Source Model: Mistral 7B from Mistral AI is an open-source language model, published under an open license, with accessible weights and architecture.
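
As an indicative sketch, such a model's published weights can be pulled and run locally, here assuming the Hugging Face `transformers` library with PyTorch installed and access to the `mistralai/Mistral-7B-v0.1` repository (the repository identifier and license terms should be verified before use):

```python
# Sketch of pulling an open-weights model, assuming the `transformers`
# library (with PyTorch) and access to the "mistralai/Mistral-7B-v0.1"
# repository on the Hugging Face Hub.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Published weights and architecture mean the model can be inspected,
# fine-tuned, and redistributed under the terms of its open license.
inputs = tokenizer("AI governance requires", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```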

Open Source and Governance Challenges

Once integrated into a deployed AI system, an open-source model may be subject to regulatory obligations. If a general-purpose model meets the conditions outlined above, it is subject to the following obligations:

  • Establish a compliance policy with EU copyright law, including identifying and respecting rights reservations.
  • Produce a summary of the content used for training.

However, if it does not present systemic risks, it may benefit from exemptions from certain obligations, notably technical documentation, information for downstream integrators, and the appointment of an authorized representative for providers established in third countries.

Challenges for Organizations

While utilizing open-source AI components offers opportunities for innovation, it complicates the governance of AI systems. It makes it more difficult to trace models and their evolutions, ensure quality and consistency of documentation, and evaluate risks related to biases, uses, and potential impacts.

Chatbots: The Conversational Interface

Definition of a Chatbot: A chatbot (or conversational assistant) is an AI system designed to simulate a conversation in a given channel and provide information, assistance, or service.

Chatbots can respond to FAQs, check order status, recommend products, or guide users through forms. Unlike agents, traditional chatbots do not pursue objectives, plan strategies, or reason through multiple steps.

They respond to each user message as it comes, without deep contextual adaptation.
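
A minimal sketch of such a traditional chatbot follows: single-turn keyword matching against a small FAQ, with no planning, memory, or multi-step reasoning. The FAQ entries are invented, and the opening notice anticipates the transparency duty discussed below:

```python
# Minimal sketch of a traditional chatbot: single-turn FAQ matching with
# no planning, memory, or multi-step reasoning. FAQ content is illustrative.

FAQ = {
    "order status": "You can track your order under Account > Orders.",
    "return policy": "Returns are accepted within 30 days of delivery.",
}

def reply(user_message: str) -> str:
    text = user_message.lower()
    for topic, answer in FAQ.items():
        if topic in text:
            return answer
    return "I'm not sure; let me connect you with a human agent."

print("Notice: you are chatting with an automated assistant.")  # transparency
print(reply("What is your return policy?"))
```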

Regulatory Challenges

Chatbots are recognized as AI systems by the European Commission and fall fully under the scope of the AI Act. Consequently, they are subject to the following obligations:

  • Transparency (Article 50): Users must be clearly informed that they are interacting with an AI system, except in strictly regulated exceptions.
  • When deployed in sensitive contexts (HR, public services, health, or education), a chatbot may be classified as a high-risk AI system, leading to enhanced requirements including risk management throughout the lifecycle, human oversight measures, and obligations for technical documentation and post-deployment monitoring.

A chatbot is never a neutral tool: because it generates responses through inference and can influence user behavior and decisions, it places responsibility on those who design, integrate, and operate it.

AI Agents: From Tools to Autonomy

AI agents refer to software systems with specific characteristics:

  • They are built on an AI model, typically used without significant further development or modification, to pursue objectives that may be explicitly defined or left open.
  • They are often made accessible through a studio or platform where users can adjust parameters.
  • They are configured to automate complex, contextualized tasks, make decisions, and execute actions without continuous human intervention.

AI agents embody the notion of agency, meaning the ability of a system to:

  • Act autonomously,
  • Initiate actions,
  • Plan sequences,
  • Adapt to changing contexts,
  • Pursue high-level objectives without ongoing human supervision.

Example of an AI Agent: An AI agent may automatically sort incoming emails, analyzing each message and identifying its category (commercial, support, urgent), then applying the appropriate action such as archiving or ticket creation.
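
A sketch of this kind of agent loop is shown below: perceive (read the message), decide (classify it via a stand-in for an AI classifier), and act (archive or open a ticket) without per-message human intervention; the classification rules are illustrative placeholders:

```python
# Sketch of a simple email-triage agent: perceive (read message), decide
# (classify via a model stand-in), act (archive or open a ticket) without
# per-message human intervention. Classification logic is illustrative.

def classify(email_body: str) -> str:
    """Stand-in for an AI classifier over email content."""
    text = email_body.lower()
    if "invoice" in text or "quote" in text:
        return "commercial"
    if "error" in text or "broken" in text:
        return "support"
    return "other"

ACTIONS = {
    "commercial": lambda e: print(f"Archived to sales: {e[:30]}..."),
    "support": lambda e: print(f"Ticket created for: {e[:30]}..."),
    "other": lambda e: print(f"Left in inbox: {e[:30]}..."),
}

for email in ["Please send a quote for 50 seats", "The app is broken again"]:
    ACTIONS[classify(email)](email)  # decision and action, no human in loop
```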

Agentic AI: Orchestration and Complexity

Agentic AI extends beyond the agent framework. According to the taxonomy proposed by Sapkota et al. (2025), agentic AI represents a paradigm shift compared to traditional AI agents. Its characteristics include:

  • Collaboration among multiple agents within the same system,
  • Dynamic decomposition of tasks into subtasks suited to the context,
  • Persistent memory allowing long-term historical exploitation,
  • Orchestrated autonomy that is structured and coordinated, exceeding the capabilities of a standalone agent.

Example of Agentic AI: In a multi-agent system, each agent executes a specific subtask to achieve a goal, and their efforts are coordinated through AI orchestration features.
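
The following sketch illustrates the pattern in miniature: an orchestrator decomposes a goal into subtasks, routes each to a specialized agent, and maintains a shared memory of intermediate results. The agents here are trivial placeholders, and a real system would decompose tasks dynamically rather than in hard-coded steps:

```python
# Sketch of agentic-style orchestration: an orchestrator decomposes a goal
# into subtasks, routes each to a specialized agent, and keeps a shared
# memory of intermediate results. Agents are trivial placeholders.

memory: dict[str, str] = {}  # persistent shared state across subtasks

def research_agent(task: str) -> str:
    return f"facts about {task}"

def writing_agent(task: str) -> str:
    return f"draft using {memory.get('research', 'no input')}"

def orchestrator(goal: str) -> str:
    # Dynamic decomposition is hard-coded here for brevity.
    memory["research"] = research_agent(goal)
    memory["draft"] = writing_agent(goal)
    return memory["draft"]

print(orchestrator("EU AI Act obligations"))
```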

Governance Challenges

AI agents and agentic systems pose significant governance challenges. Coordination among agents can lead to emergent behaviors and unforeseen effects that are difficult to anticipate.

Increased autonomy complicates accountability and maintaining effective human oversight, while the opacity of models limits transparency, explainability, and auditability of decisions.

These challenges necessitate reinforced governance mechanisms, alignment, and bias management.

Why This Inventory is Essential for AI Governance

The diversity of artificial intelligence technologies makes it crucial to have a nuanced understanding of the different types of AI deployed within organizations. This inventory serves as a prerequisite for effective governance.

Precisely identifying the types of AI used allows organizations to determine applicable regulatory obligations, which vary based on the nature of the systems, their purposes, and capabilities. It also facilitates a more accurate risk assessment, taking into account the level of autonomy and potential impact.
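
One lightweight way to support such an inventory is a structured record per system. The sketch below proposes one possible shape; the field names and risk categories are illustrative choices, not a prescribed schema:

```python
# Sketch of a minimal AI-system inventory record supporting this mapping
# exercise; field names, example entries, and risk labels are illustrative.

from dataclasses import dataclass

@dataclass
class AiSystemRecord:
    name: str
    ai_type: str           # e.g., "chatbot", "agent", "GPAI-based system"
    underlying_model: str  # traceability toward the technical component
    purpose: str
    risk_class: str        # e.g., "minimal", "limited", "high"
    deployer: str

inventory = [
    AiSystemRecord("HR screening", "specialized system", "cv-ranker-v2",
                   "resume triage", "high", "HR department"),
    AiSystemRecord("Support bot", "chatbot", "gpt-4 (illustrative)",
                   "customer FAQ", "limited", "customer service"),
]

high_risk = [r.name for r in inventory if r.risk_class == "high"]
print(high_risk)  # systems facing enhanced AI Act obligations
```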
