Are Today’s Governance Frameworks Ready for Agentic AI?
As AI systems evolve, they are beginning to operate with increasing independence, moving beyond simple rule-following to making decisions, delegating tasks, and adjusting goals without human oversight. These developments present significant governance challenges that current frameworks are ill-equipped to address.
Who Should Be Concerned?
This issue is critical for anyone involved in the development, management, or oversight of AI systems within regulated environments. Traditional assumptions that worked for task-based systems do not apply to agentic systems, which operate autonomously. Without clearer standards for autonomy and decision authority, organizations risk being unable to anticipate, constrain, or account for what these agents do.
Defining Agentic Systems
An agentic system is capable of pursuing goals, making decisions, and taking actions with little or no human intervention. Such systems can interact with other systems, adapt their strategies dynamically, and determine when to escalate tasks.
This raises essential questions:
- How much freedom should an AI agent have to make decisions?
- Who holds responsibility when an agentic system delegates tasks to another system?
- Can existing oversight structures identify when a system operates outside its defined boundaries?
If your governance model relies on fixed workflows, static approvals, or manual reviews, it is likely inadequate for managing agentic systems.
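To make the contrast concrete, here is a minimal sketch in Python of what a runtime decision gate might look like: instead of a fixed approval workflow, each action an agent proposes is checked against a declared authority level, and anything unrecognized is escalated by default. The action names, authority levels, and mapping below are hypothetical illustrations, not requirements from any of the frameworks discussed next.

```python
from enum import Enum


class Authority(Enum):
    """Hypothetical decision-authority levels an organization might define."""
    AUTONOMOUS = "autonomous"          # agent may act without review
    HUMAN_APPROVAL = "human_approval"  # a person must approve first
    PROHIBITED = "prohibited"          # the agent may never take this action


# Illustrative authority map; in practice this would come from governance policy.
AUTHORITY_MAP = {
    "summarize_document": Authority.AUTONOMOUS,
    "send_customer_email": Authority.HUMAN_APPROVAL,
    "issue_refund": Authority.HUMAN_APPROVAL,
    "modify_production_config": Authority.PROHIBITED,
}


def gate_action(action: str) -> str:
    """Decide how a proposed agent action should be handled.

    Unknown actions escalate by default, so new behaviors surface for
    review instead of slipping through a static approval list.
    """
    authority = AUTHORITY_MAP.get(action, Authority.HUMAN_APPROVAL)
    if authority is Authority.PROHIBITED:
        return "block"
    if authority is Authority.HUMAN_APPROVAL:
        return "escalate_to_human"
    return "allow"


for proposed in ["summarize_document", "issue_refund", "delete_backups"]:
    print(proposed, "->", gate_action(proposed))
```

The governance property, not the code, is the point: any action that has not been explicitly reviewed and mapped triggers human escalation, which is the opposite of how most static approval workflows behave.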
Current Frameworks for Responsible AI
Three primary frameworks underpin responsible AI programs:
- ISO/IEC 42001
- NIST AI Risk Management Framework
- EU AI Act
ISO/IEC 42001
This international standard outlines requirements for establishing an AI management system, emphasizing documentation, process control, and continual improvement. While it helps organizations define roles and responsibilities, it lacks guidance on:
- Setting or monitoring boundaries for autonomous behavior.
- Defining what decisions an agentic system may make.
- Managing delegation of authority.
NIST AI Risk Management Framework
The NIST framework focuses on identifying, measuring, and managing AI-related risks. It promotes accountability and transparency while recognizing the significance of context. Although flexible enough for agentic systems, it does not:
- Define thresholds for acceptable autonomy.
- Explain how to monitor decision delegation or goal drift over time.
EU AI Act
The EU AI Act is the most comprehensive regulatory framework, imposing specific obligations based on risk classification. High-risk systems are subject to documentation, oversight, and human review requirements. However, it primarily regulates use cases rather than system behavior, and it offers no detailed guidance on handling unexpected or emergent AI behavior.
Key Gaps to Address
For those developing or governing agentic systems, existing frameworks omit crucial aspects:
- No standard for autonomy levels.
- No clear approach to delegation.
- No tools to detect autonomy drift.
- No oversight of emergent behavior.
These gaps are not merely theoretical; they already affect how organizations deploying agentic AI manage risk, demonstrate compliance, and maintain stakeholder trust.
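As one illustration of what closing the autonomy-drift gap could look like in practice, the sketch below tracks the share of an agent's actions taken without human escalation over a rolling window and flags when that share exceeds the baseline approved at deployment. The class name, thresholds, and window size are illustrative assumptions, not requirements from ISO/IEC 42001, the NIST framework, or the EU AI Act.

```python
from collections import deque


class AutonomyDriftMonitor:
    """Track the share of agent actions executed without human escalation.

    A rising unsupervised-action rate relative to the approved baseline is
    one simple, observable signal of autonomy drift.
    """

    def __init__(self, baseline_rate: float, tolerance: float, window: int = 500):
        self.baseline_rate = baseline_rate   # rate approved at deployment, e.g. 0.60
        self.tolerance = tolerance           # allowed deviation, e.g. 0.10
        self.events = deque(maxlen=window)   # True = acted without escalation

    def record(self, escalated_to_human: bool) -> None:
        self.events.append(not escalated_to_human)

    def current_rate(self) -> float:
        return sum(self.events) / len(self.events) if self.events else 0.0

    def drifting(self) -> bool:
        """Flag when autonomous behavior exceeds the approved envelope."""
        return self.current_rate() > self.baseline_rate + self.tolerance


# Usage: feed every agent decision into the monitor and alert on drift.
monitor = AutonomyDriftMonitor(baseline_rate=0.60, tolerance=0.10)
for escalated in [False, False, True, False, False, False, False, True, False, False]:
    monitor.record(escalated)
if monitor.drifting():
    print(f"Autonomy drift detected: {monitor.current_rate():.0%} unsupervised actions")
```

A real control would also need to distinguish benign changes in workload mix from genuine expansion of the agent's decision scope, but even a coarse signal like this makes drift visible and reviewable.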
Support from the RAI Institute
The RAI Institute aims to help organizations bridge these gaps. It assists in operationalizing responsible AI through programs such as TrustX Risk Classification, which assesses the risk level of an AI system before controls are applied, so that oversight is proportionate to real-world impact.
Additionally, the RAISE Pathways program offers over 1,100 mapped AI controls aligned with global standards, enabling organizations to benchmark practices and strengthen governance where existing frameworks fall short.
Its verification and assessment programs examine what autonomy means within a given system, reviewing decision authority, delegation boundaries, and oversight protocols.
Taking Control of Agentic Systems
With agentic systems already deployed across a range of industries, organizations must recognize that they are dealing with more than standard automation. Existing controls may not be enough; at a minimum, organizations should take the following steps (a brief sketch of how they might be recorded appears after the list):
- Map decision authority.
- Set clear boundaries and escalation points.
- Establish monitoring for autonomy drift.
- Validate governance through independent oversight.
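One way to make these steps auditable is to record them in a single, reviewable policy artifact rather than scattering them across tickets and runbooks. The sketch below uses hypothetical field names and values to show how decision authority, boundaries, escalation points, and drift monitoring might be captured together; any real policy would have to reflect the organization's own risk classification and applicable regulation.

```python
# A hypothetical agent governance policy captured as data, so it can be
# version-controlled, reviewed, and checked against runtime behavior.
AGENT_GOVERNANCE_POLICY = {
    "agent": "claims-triage-agent",            # illustrative agent name
    "decision_authority": {
        "classify_claim": "autonomous",
        "request_documents": "autonomous",
        "approve_payout": "human_approval",    # boundary: money movement
        "deny_claim": "human_approval",
    },
    "boundaries": {
        "max_autonomous_payout_eur": 0,        # agent never moves money alone
        "allowed_data_scopes": ["claims", "policy_terms"],
    },
    "escalation": {
        "triggers": ["low_confidence", "out_of_scope_request", "customer_complaint"],
        "route_to": "claims-review-queue",
    },
    "monitoring": {
        "unsupervised_action_baseline": 0.60,  # rate approved at deployment
        "drift_tolerance": 0.10,
        "review_cadence_days": 30,
    },
}


def requires_human(action: str) -> bool:
    """Return True when the policy says a person must approve this action."""
    authority = AGENT_GOVERNANCE_POLICY["decision_authority"].get(action, "human_approval")
    return authority != "autonomous"


print(requires_human("approve_payout"))   # True
print(requires_human("classify_claim"))   # False
```

Keeping the policy as a versioned artifact also gives independent reviewers something concrete to validate against observed agent behavior.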
Organizations that act proactively will shape the future of responsible AI rather than react to it.