Governing Agentic AI: Strategies to Mitigate Risks

How Can We Govern Agentic AI Systems to Prevent Harm?

None of the seven major AI companies evaluated—including OpenAI, Google DeepMind, and Anthropic—received higher than a D grade in “existential safety planning”. Even modern approaches like NIST’s AI Risk Management Framework still rely on human oversight that autonomous agents may bypass.

Legacy Approaches Fail with Autonomous Systems

Traditional AI governance treats agentic systems as enhanced chatbots rather than autonomous decision-makers. Existing AI compliance frameworks were designed around predictable workflows and constant human oversight, assumptions that autonomous agents routinely break.

Each autonomous decision point creates potential failure modes that compound across system operations. The NIST AI RMF addresses this with four core functions (Govern, Map, Measure, Manage) that guide organizations in managing AI risk as a continuous process rather than a one-time review.
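A rough back-of-the-envelope calculation shows how quickly independent failure modes compound; the 1% per-decision failure rate below is an assumed figure for illustration, not a measured one.

```python
# Illustration only: assumes independent failures at an invented 1% rate.
p_fail = 0.01  # assumed probability that any single autonomous decision goes wrong

for n in (10, 100, 1000):
    p_any = 1 - (1 - p_fail) ** n  # probability of at least one failure in n decisions
    print(f"{n:5d} decisions -> {p_any:.1%} chance of at least one failure")

# Output: ~9.6% at 10 decisions, ~63.4% at 100, ~100.0% at 1000.
```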

Identity Becomes the Main Control Plane

Identity becomes the primary security control plane when AI agents operate across organizational boundaries. Zero Trust architecture is essential because autonomous agents act unpredictably and touch many systems.

Continuously verifying every agent’s identity, request, and action prevents unauthorized access and cascading security failures. The goal for enterprises is to let agents act autonomously while keeping every action authenticated, authorized, and attributable.
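A minimal sketch of what per-action verification can look like, assuming a hypothetical policy store and token check (none of the names below come from a specific product or standard):

```python
# Minimal sketch of a per-action Zero Trust gate for agents. All names here
# (AgentAction, verify_token, POLICY) are illustrative assumptions, not a
# specific vendor API or standard.
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentAction:
    agent_id: str
    token: str       # short-lived credential presented with every request
    resource: str    # system or API the agent wants to touch
    operation: str   # e.g. "read", "write", "execute"
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))

# Hypothetical allow-list: which operations each agent identity may perform.
POLICY = {
    ("support-agent", "crm", "read"),
    ("support-agent", "ticketing", "write"),
}

def verify_token(token: str) -> bool:
    """Stand-in for real credential validation (OIDC, mTLS, SPIFFE, etc.)."""
    return token.startswith("valid-")

def authorize(action: AgentAction) -> bool:
    """Check identity and policy on every call; never cache an earlier 'yes'."""
    if not verify_token(action.token):
        return False
    return (action.agent_id, action.resource, action.operation) in POLICY

token = "valid-demo-credential"
print(authorize(AgentAction("support-agent", token, "crm", "read")))        # True
print(authorize(AgentAction("support-agent", token, "payments", "write")))  # False
```

Because each action is checked independently, a compromised agent cannot ride an earlier approval into new systems.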

Three-Tiered Frameworks Provide Scalable Oversight

Leading governance experts advocate layered oversight: a three-tiered framework of guardrails that scales with use-case risk and potential impact.

For example, a chatbot answering retail questions may need only minimal guardrails, such as user disclaimers and basic monitoring, while a banking dispute agent requires rigorous testing, detailed audit logging, and real-time supervision.
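One way to operationalize the tiers is a simple risk classifier; the risk factors, scoring, and control lists below are illustrative assumptions, not a published standard:

```python
# Illustrative tier assignment; the risk factors, scoring, and control lists
# are assumptions for demonstration, not a published standard.
from enum import Enum

class Tier(Enum):
    LOW = 1     # disclaimers, basic monitoring
    MEDIUM = 2  # adds human review queues and scoped permissions
    HIGH = 3    # adds rigorous testing, audit logging, real-time supervision

CONTROLS = {
    Tier.LOW: ["user disclaimer", "basic monitoring"],
    Tier.MEDIUM: ["user disclaimer", "basic monitoring",
                  "human review queue", "scoped permissions"],
    Tier.HIGH: ["user disclaimer", "rigorous pre-deployment testing",
                "detailed audit logging", "real-time supervision"],
}

def classify(handles_money: bool, irreversible_actions: bool,
             regulated_domain: bool) -> Tier:
    """Map coarse risk factors to a guardrail tier."""
    score = sum([handles_money, irreversible_actions, regulated_domain])
    if score == 0:
        return Tier.LOW
    return Tier.MEDIUM if score == 1 else Tier.HIGH

# A retail FAQ chatbot vs. a banking dispute agent:
print(classify(False, False, False))  # Tier.LOW
print(classify(True, True, True))     # Tier.HIGH
```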

Real-Time Intervention Beats After-the-Fact Audits

Agentic systems demand continuous monitoring and the capability to intervene immediately. When systems operate autonomously, real-time monitoring is critical for detecting agentic hallucinations before they translate into downstream actions.

Forward-thinking organizations implement “embedded compliance” by building regulatory requirements directly into system design. This includes real-time monitoring, automated compliance checks, and comprehensive audit trails.
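A sketch of the embedded-compliance pattern, where an automated pre-execution check and an append-only audit record gate every agent action; the rules, limits, and file-based log are placeholder assumptions:

```python
# Sketch of "embedded compliance": an automated check and an append-only
# audit record gate every agent action before it runs. The rules, limits,
# and file-based log are placeholder assumptions.
import json
import time
from typing import Callable

AUDIT_LOG = "audit_trail.jsonl"  # real systems would use tamper-evident storage

def compliance_check(action: dict) -> tuple[bool, str]:
    """Automated pre-execution check; these rules are illustrative."""
    if action.get("amount", 0) > 10_000:
        return False, "exceeds unattended transaction limit"
    if action.get("contains_pii") and not action.get("pii_approved"):
        return False, "PII handling not approved"
    return True, "ok"

def execute_with_compliance(action: dict, do: Callable[[dict], None]) -> bool:
    """Log the decision first, then execute only if the check passes."""
    allowed, reason = compliance_check(action)
    record = {"ts": time.time(), "action": action,
              "allowed": allowed, "reason": reason}
    with open(AUDIT_LOG, "a") as f:  # the audit entry precedes execution
        f.write(json.dumps(record) + "\n")
    if allowed:
        do(action)
    return allowed

execute_with_compliance({"type": "refund", "amount": 50},
                        lambda a: print("executed:", a))  # runs and is logged
execute_with_compliance({"type": "refund", "amount": 50_000},
                        lambda a: print("executed:", a))  # blocked and logged
```

Writing the audit entry before execution means even blocked or failed actions leave a trace, which is what makes after-the-fact review possible at all.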

Regulatory Frameworks Struggle to Catch Up

Current regulations inadequately address autonomous AI systems, creating governance gaps. The EU AI Act requires high-risk AI systems to enable effective human oversight.

However, regulations are still evolving, and experts note that none specifically addresses agentic AI. Defining clear standards for what counts as “effective” oversight, and testing compliance against those standards, remains challenging.

Implementation Reveals Widespread Unpreparedness

Among surveyed executives, 95% said their organizations experienced negative consequences from enterprise AI use in the past two years, and 86% of those aware of agentic AI believed the technology poses additional risk and compliance challenges.

Organizations must institute robust feedback and response systems now, while there is still time to shape AI deployment. Meanwhile, approximately 2% of papers published at top machine-learning venues are safety-relevant.

The question is not whether agentic AI will transform business operations, but whether boards will lead that transformation through effective oversight. Capability development is vastly outpacing governance solutions.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...

AI in Australian Government: Balancing Innovation and Security Risks

The Australian government is considering using AI to draft sensitive cabinet submissions as part of a broader strategy to implement AI across the public service. While some public servants report...