Rethinking Governance in the Age of Agentic AI

What AI Agents Can Teach Us About NHI Governance

Artificial intelligence (AI) is a broad field with numerous practical applications. In recent years, we have witnessed explosive growth in generative AI, driven by systems like ChatGPT, Copilot, and other interactive tools that assist developers in writing code and users in creating content. More recently, we have seen the rise of Agentic AI, where orchestrators coordinate actions across multiple AI agents to perform tasks on behalf of users.

While this may sound futuristic, the reality in 2026 is simpler. AI systems, regardless of their deployment, are essentially processes running on machines. They reside on laptops, in containers, within virtual machines, or deep in cloud environments. Fundamentally, they are software executing instructions—probabilistic rather than deterministic and hard-coded, but software all the same. Like every subsystem we have built, they require secure communication.

The Challenge of NHI Governance

As we rush to adopt agentic AI, we are making a familiar mistake: focusing on capability and speed while neglecting non-human identity (NHI) security and governance. This is evident in how we connect AI tools to sensitive systems (repositories, cloud, ticketing, secrets) without consistently applying the principle of least privilege. This gap has persisted for years in continuous integration (CI) systems, background jobs, service accounts, and automation. Agentic AI is not inventing this gap but is rapidly widening it.

Trust as the Central Element

In any system—whether it’s a chatbot, a CI worker, or a long-running daemon—security ultimately comes down to trust. The same questions arise: Who is making the request? What are they allowed to do? Under what specific conditions?

Modern architectures often claim zero trust, yet many still rely on fragile credentials: long-lived secrets such as API keys or never-expiring certificates. The real risk lies in keeping static tokens in environment variables, dotfiles, build logs, or shared vault paths accessible to numerous processes, including AI tools. When such a key leaks, anyone who finds or steals it can use it until someone notices.

What zero trust truly requires is a separation of concerns. Authentication should verify the identity of an entity, while authorization should define the actions that entity is permitted to take. Without this separation, we end up granting broad permissions that are difficult to track and revoke, and highly attractive to attackers.
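
To make that separation concrete, here is a minimal Python sketch. The token registry, the POLICY table, and all identity names are hypothetical illustrations, not any particular product’s API.

```python
# Minimal sketch: authentication answers "who is this?";
# authorization answers "what may they do?". All names are hypothetical.

POLICY = {
    "ci-agent-build": {"repo:read", "artifact:write"},
    "review-agent": {"repo:read", "pr:comment"},
}

def authenticate(token: str, registry: dict) -> str:
    """Map a presented credential to a verified identity, or fail loudly."""
    identity = registry.get(token)
    if identity is None:
        raise PermissionError("unknown credential")
    return identity

def authorize(identity: str, action: str) -> bool:
    """Separately decide whether this verified identity may take this action."""
    return action in POLICY.get(identity, set())

# Usage: a request must pass BOTH checks before anything executes.
registry = {"tok-123": "ci-agent-build"}
who = authenticate("tok-123", registry)
assert authorize(who, "repo:read")        # allowed: within declared scope
assert not authorize(who, "deploy:prod")  # denied: broad access never granted
```

Keeping the two decisions in separate functions is the point: revoking or rescoping an identity changes one policy entry, not every place a credential is checked.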

Understanding NHIs

NHIs include anything that is not human but still connects to running systems: bots, scripts, workloads, service accounts, and CI jobs. Now, this list also includes agents and agentic systems. The control plane remains the same: identity, ownership, lifecycle, and permissions. Agentic AI does not alter this reality; it merely changes how we interact with these systems.

We assign names and personalities to AI agents, treating them like coworkers rather than mere workloads. This anthropomorphization subtly shifts how we think about trust and responsibility, which can be beneficial: we start holding agents accountable the way we hold coworkers accountable.

Agents in CI Pipelines and Terminals

When we treat agents as actors, much as we treat humans, it becomes easier to apply established identity patterns. Standards like OAuth and OpenID Connect exist because we have learned over decades that standing privilege does not scale securely. Short-lived, verifiable credentials usually outperform permanent keys. Scoped permissions are preferable to broad access, particularly from a safety perspective.
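
The stdlib-only Python sketch below illustrates the shape of a short-lived, scoped credential. In practice an OAuth or OIDC provider plays this role; the signing key, claims, and names here are invented for illustration.

```python
import base64, hashlib, hmac, json, time

SIGNING_KEY = b"demo-only-key"  # in practice, held by the identity provider

def mint_token(subject: str, scopes: list, ttl_seconds: int = 300) -> str:
    """Issue a short-lived, scoped, verifiable credential (JWT-like shape)."""
    claims = {"sub": subject, "scopes": scopes, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_token(token: str) -> dict:
    """Reject tampered or expired tokens before any authorization check."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("bad signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    if time.time() > claims["exp"]:
        raise PermissionError("token expired")
    return claims

token = mint_token("pr-review-agent", ["repo:read", "pr:comment"])
print(verify_token(token)["scopes"])  # a leaked token dies on its own in minutes
```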

However, this analogy does break down in certain areas. For instance, since agents lack fingerprints or the ability to authenticate via traditional multi-factor methods, securing background processes remains a challenge. Nonetheless, the core idea holds: every entity accessing a system should be provable, attributable, and constrained.

Continuous integration pipelines have evolved beyond simple build scripts. They are now the place where code becomes reality: pulling in dependencies, relying on task runners, and ideally passing through testing and review. If an agent can read a failing build, modify code, open a pull request, and iterate until the build passes, that is a genuine productivity gain. However, it also requires access to repositories, build logs, artifacts, and sometimes deployment pathways. CI environments have a history of being over-permissioned because teams prioritize uptime and speed over a strict security posture.
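
One practical way to surface that over-permissioning is to diff what each pipeline identity is granted against what its access logs show it actually using. The sketch below uses invented identities, scopes, and log data purely for illustration.

```python
# Sketch: flag standing CI permissions that audit logs show are never used.

granted = {
    "build-agent": {"repo:read", "artifact:write", "deploy:prod", "secrets:read"},
    "test-agent": {"repo:read"},
}

# Scopes actually exercised over the review window, e.g. from access logs.
observed = {
    "build-agent": {"repo:read", "artifact:write"},
    "test-agent": {"repo:read"},
}

for identity, scopes in granted.items():
    unused = scopes - observed.get(identity, set())
    if unused:
        print(f"{identity}: candidate scopes to revoke -> {sorted(unused)}")
# build-agent: candidate scopes to revoke -> ['deploy:prod', 'secrets:read']
```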

Risks in the Terminal and Browsers

In the terminal, the risk is less glamorous and more common. Terminal sessions are steeped in implicit trust: environment variables, config files, and debug output routinely contain secrets. These habits made sense when only a human was at the keyboard. An agent in the terminal, however, can act quickly and read widely, increasing the likelihood of exposing secrets unless guardrails such as redaction and scanning are in place.
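
A minimal form of such a guardrail is to scan any text an agent will read for known credential shapes before handing it over. The Python sketch below covers two well-known public token formats; a production scanner would need far broader coverage and entropy-based detection.

```python
import re

# Sketch: redact obvious credential shapes before an agent reads the output.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access token
]

def redact(text: str) -> str:
    """Replace anything matching a known secret shape with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

log_line = "export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE"
print(redact(log_line))  # the agent sees the command, never the credential
```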

Browser agents operate inside authenticated sessions, which raises the stakes. If an agent can navigate internal tools, approve actions, download data, or change configurations, we must ask what identity it uses and what it is permitted to do. Can its actions be reconstructed after the fact? Can its permissions be tuned for edge cases?

The Need for Rigorous Governance

Framing agents as NHIs is vital. It avoids the fantasy and forces us to confront critical questions: Who owns the agent? What permissions does it have? What is its lifecycle? How do we observe and audit its actions?

If these questions cannot be answered, an organization is not effectively managing agentic AI; it is merely running ungoverned automation at scale, which is inherently dangerous.

Scaling Governance Through Inventory

Governance that scales begins with inventory and alignment. Treating NHIs with rigor sounds straightforward until one attempts to implement it. The difficult part is not the principle, but the execution. It starts with understanding existing credentials, services, agents, and automations. This inventory is crucial for risk management, as one cannot mitigate risks that remain unseen.

Accountability for permissions must also be established. Someone provisioned access for every agent, whether explicitly through tokens or implicitly by letting it run inside an authenticated session. The governance challenge lies in ensuring that every access path has a named owner, adheres to least privilege, and is auditable.
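
In practice, such an inventory can start small: one record per NHI with an owner, scopes, and a lifecycle date, plus a check that flags gaps. The Python sketch below uses hypothetical records and field names.

```python
from dataclasses import dataclass
from datetime import date

# Sketch: the minimum record an NHI inventory needs to support governance.

@dataclass
class NHIRecord:
    name: str
    owner: str | None     # a named human or team; None means ungoverned
    scopes: frozenset
    rotate_by: date

inventory = [
    NHIRecord("ci-build-agent", "platform-team", frozenset({"repo:read"}), date(2026, 3, 1)),
    NHIRecord("legacy-cron-bot", None, frozenset({"db:admin"}), date(2024, 1, 1)),
]

today = date(2026, 1, 15)
for record in inventory:
    if record.owner is None:
        print(f"{record.name}: no named owner")
    if record.rotate_by < today:
        print(f"{record.name}: credential overdue for rotation")
```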

The organizational challenge necessitates collaboration across traditionally siloed departments. No single team can manage identity and access at scale across all humans, workloads, agents, and future developments.

Conclusion

Agentic AI serves as a forcing function, illuminating identity failures we have tolerated for years. Any entity capable of acting on your behalf must be treated with the same seriousness as human access. Transformation will not occur overnight; it requires time for inventory, cleanup, and migration across teams and technologies. Alignment is more crucial than the tools themselves. Without a shared strategy and governance, agentic AI will only accelerate existing failures, leading to breaches and incidents.

All non-human identities face fundamental challenges, including the need for verifiable identity, least-privilege access, and continuous oversight. Agentic AI is not a special case; it is a stress test. The teams that recognize this early and establish robust governance will succeed in navigating the evolving landscape of AI.
