Introduction
The governance of adaptive AI and agents has become a pressing issue in today’s technological landscape. Discussions around scaling artificial intelligence often revolve around factors such as accuracy, robustness, explainability, and data quality. However, when pilots fail to scale, the common explanation tends to be the immaturity of the technology. This perspective is increasingly inadequate.
The Nature of Drift
Once AI systems are deployed in real operational environments, they do not merely execute static logic. Instead, they continuously interact with users, data, and organizational processes, leading to behavioral changes over time—often without explicit visibility. This phenomenon is known as drift, and in governance discussions, it is often framed as an anomaly or failure mode.
However, under the EU AI Act, this framing is misleading. Drift is not an exception; it is an inherent characteristic of systems that operate with any degree of autonomy. As AI systems grow more adaptive, behavioral change becomes a condition to be governed rather than a risk to be eliminated.
Regulatory Concerns with Drift
From the perspective of the EU AI Act, drift becomes a regulatory concern when behavioral changes push system actions beyond the declared scope of use or into prohibited practices (Article 5). Additionally, it can undermine the effectiveness of risk controls established under the risk management system (Article 9). In such scenarios, obligations for accountability and traceability can degrade over time if drift is not explicitly governed.
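To make this concrete, here is a minimal, illustrative Python sketch of one possible control, not a mechanism defined by the Act; the names `declared_scope` and `drift_tolerance` are assumptions. It tracks how often observed actions fall outside a declared scope of use and signals when that rate exceeds a tolerance.

```python
from collections import deque

class ScopeDriftMonitor:
    """Illustrative only: tracks how often a deployed system acts
    outside its declared scope of use (names are hypothetical)."""

    def __init__(self, declared_scope: set[str],
                 drift_tolerance: float = 0.02, window: int = 1000):
        self.declared_scope = declared_scope    # action types declared at deployment
        self.drift_tolerance = drift_tolerance  # maximum tolerated out-of-scope rate
        self.recent = deque(maxlen=window)      # rolling window of observations

    def record(self, action_type: str) -> None:
        """Record one observed action and whether it was within the declared scope."""
        self.recent.append(action_type in self.declared_scope)

    def out_of_scope_rate(self) -> float:
        if not self.recent:
            return 0.0
        return 1 - sum(self.recent) / len(self.recent)

    def drift_exceeds_tolerance(self) -> bool:
        """True when observed behaviour has moved beyond what was declared."""
        return self.out_of_scope_rate() > self.drift_tolerance


# Example: a system declared for summarisation and classification starts doing more.
monitor = ScopeDriftMonitor(declared_scope={"summarise", "classify"})
for action in ["summarise", "classify", "issue_refund", "summarise"]:
    monitor.record(action)
print(monitor.out_of_scope_rate())        # 0.25
print(monitor.drift_exceeds_tolerance())  # True
```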
Supervisory Dimensions of Drift
Two dimensions are particularly relevant for supervisors:
- Static vs. Adaptive Systems: Static systems execute fixed logic after deployment; adaptive systems can change their behavior post-deployment due to feedback loops, policy updates, or learning mechanisms.
- Accepted vs. Unaccepted Drift: Accepted drift is acknowledged, bounded, and periodically reviewed; unaccepted drift is ignored or occurs outside explicit governance.
Supervisory Quadrants
These dimensions yield four supervisory states, sketched in code after the list:
- Controlled Stability: Static systems in which limited drift is recognized and periodically corrected, aligning with traditional conformity assessment.
- Drift Waste: Static systems where drift is ignored, leading to compliance erosion through workarounds and manual interventions.
- Drift Blindness: Adaptive systems that evolve without explicit governance, risking accountability and traceability.
- Controlled Growth: Adaptive systems where drift is anticipated, bounded, observable, and accountable, allowing for scalable governance.
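The quadrant model above reduces to a simple classification over the two dimensions. The sketch below is purely illustrative; the field names `is_adaptive` and `drift_is_governed` are assumptions, not terms from the Act.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    is_adaptive: bool        # can behaviour change after deployment?
    drift_is_governed: bool  # is drift anticipated, bounded, and reviewed?

def supervisory_quadrant(profile: SystemProfile) -> str:
    """Map the two dimensions onto the four supervisory states."""
    if profile.is_adaptive:
        return "Controlled Growth" if profile.drift_is_governed else "Drift Blindness"
    return "Controlled Stability" if profile.drift_is_governed else "Drift Waste"

# Example: an adaptive system whose drift is not explicitly governed.
print(supervisory_quadrant(SystemProfile(is_adaptive=True, drift_is_governed=False)))
# -> Drift Blindness
```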
Shift in Supervisory Questions
Once drift is acknowledged as inevitable, the nature of supervisory questions changes. The focus shifts from:
- “Is the model accurate?”
- “Can the AI be trusted?”
To questions such as the following (see the sketch after this list):
- What actions is the system authorized to take?
- Under what mandate and intended purpose?
- Within which operational boundaries?
- What evidence is produced for audit and supervision?
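One way to make these questions answerable on demand is to keep the answers as a machine-readable declaration that travels with the system. The sketch below is a minimal illustration under that assumption; the structure and field names (`intended_purpose`, `authorized_actions`, `operational_boundaries`, `audit_evidence`) are hypothetical, not a format defined by the EU AI Act.

```python
from dataclasses import dataclass, field

@dataclass
class Mandate:
    """Illustrative declaration answering the supervisory questions above."""
    intended_purpose: str                   # under what mandate and intended purpose
    authorized_actions: set[str]            # what the system may do
    operational_boundaries: dict[str, str]  # limits on where and how it may act
    audit_evidence: list[str] = field(default_factory=list)  # evidence produced for audit

def is_authorized(mandate: Mandate, action: str) -> bool:
    """An action outside the declared mandate should be refused and logged."""
    return action in mandate.authorized_actions

# Example: a claims-triage assistant with a narrow, declared mandate.
claims_assistant = Mandate(
    intended_purpose="Triage incoming insurance claims",
    authorized_actions={"classify_claim", "request_documents"},
    operational_boundaries={"max_claim_value": "EUR 10,000", "region": "EU"},
    audit_evidence=["decision_log", "policy_version", "approval_record"],
)
print(is_authorized(claims_assistant, "approve_payout"))  # False
```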
Dimensions of Governability
Three dimensions are essential for ensuring governability; a brief code sketch combining them follows the list:
- Reasoning: Ensuring decisions can be justified at a system level, linking outcomes to policies, inputs, and approvals.
- Action: Clearly defining what the system may and may not do in accordance with its declared purpose.
- Cognition: Supervising how the system’s operating space evolves, ensuring that changes remain observable and reviewable.
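A minimal sketch of how the three dimensions might surface in an implementation, with all names hypothetical: each outcome is linked to the policy, inputs, and approval that produced it (reasoning); actions are gated against the declared scope (action); and changes to that scope are themselves logged for review (cognition).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Reasoning: link an outcome to the policy, inputs, and approval behind it."""
    outcome: str
    policy_version: str
    inputs: dict[str, str]
    approved_by: str

@dataclass
class GovernedSystem:
    allowed_actions: set[str]                                    # Action: the declared scope
    decisions: list[DecisionRecord] = field(default_factory=list)
    change_log: list[str] = field(default_factory=list)          # Cognition: reviewable evolution

    def act(self, action: str, record: DecisionRecord) -> bool:
        """Refuse anything outside the declared scope; keep evidence for what was allowed."""
        if action not in self.allowed_actions:
            return False
        self.decisions.append(record)
        return True

    def update_scope(self, action: str, reason: str) -> None:
        """Changes to the operating space are themselves observable and reviewable."""
        self.allowed_actions.add(action)
        self.change_log.append(
            f"{datetime.now(timezone.utc).isoformat()} added '{action}': {reason}"
        )

# Example: the operating space grows, but the change leaves a reviewable trace.
system = GovernedSystem(allowed_actions={"classify_claim"})
system.update_scope("request_documents", reason="approved change request")
print(system.change_log)
```

The specific structure matters less than the property it illustrates: each dimension leaves evidence that a supervisor can inspect.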
Implications for Supervision
As AI systems become more adaptive, supervisory focus will increasingly shift from static conformity to ongoing control, from model internals to system behavior, and from initial certification to continuous accountability. The obligations under the EU AI Act attach to ongoing behavior rather than static design, making it essential for adaptive systems to remain governable throughout their operational lifecycle.
In conclusion, while the EU AI Act does not prohibit adaptive AI, it requires that adaptivity be governed. Institutions that can effectively demonstrate this governance will be well-positioned to scale AI responsibly; those that cannot will face challenges not because of the technology itself, but because control over it has not been adequately articulated and demonstrated in regulatory terms.