Entro Targets AI Access Blind Spots With New Governance Layer for Agents
Enterprise AI adoption is moving quickly, but most organizations are integrating artificial intelligence into their operations one step at a time. That incremental approach often produces a tangle of tools and integrations, creating significant visibility problems around AI agents and the access they hold to internal systems.
The Gap in AI Governance
As developers connect tools to large language models (LLMs) and teams install AI applications, they inadvertently create a web of connections that is difficult to monitor. Entro Security is addressing this issue with its new Agentic Governance & Administration (AGA) capability, arguing that AI agents need to be governed the same way traditional identities are.
The Visibility Problem Behind AI Adoption
Security teams are accustomed to managing access for human users and applications, but AI agents operate differently. They are created rapidly, run continuously, and rely on non-human identities such as API keys, tokens, and service accounts. This creates a visibility problem, leaving security teams unable to answer fundamental questions about the agents in their environments and their access capabilities.
Itzik Alvas, CEO and co-founder of Entro Security, points out that existing Identity Governance and Administration (IGA) tools were not designed to manage autonomous agents. AI agents can connect quickly and gain access through many routes, raising basic questions about what each agent is, who owns it, and whether its access is appropriate.
How AI Changes the Access Model
Traditional access models tie permissions to logins or specific applications. AI agents follow a more fluid pattern: what they can reach depends on how they connect to various systems and the permissions they inherit along the way. The risk is therefore no longer tied to a single account; it spreads across multiple integrations and data access paths, making the potential exposure much harder to see.
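The article does not describe how AGA models this internally, but the shift is easy to illustrate. The sketch below uses hypothetical names and permissions (not Entro's data model) to represent one agent's effective access as the set of integrations it can reach and the permissions it inherits in each, rather than a single account with a single login:

```python
from dataclasses import dataclass, field

@dataclass
class Integration:
    """A system the agent can reach, with the permissions it inherits there."""
    name: str
    permissions: set[str] = field(default_factory=set)

@dataclass
class Agent:
    """A non-human identity: the agent plus the credentials it uses."""
    name: str
    credentials: list[str]                      # API keys, tokens, service accounts
    integrations: list[Integration] = field(default_factory=list)

def effective_exposure(agent: Agent) -> dict[str, set[str]]:
    """Flatten the agent's access paths into one view: system -> permissions."""
    return {i.name: i.permissions for i in agent.integrations}

# Illustrative example: one agent's risk spans several systems, not one account.
support_bot = Agent(
    name="support-summarizer",
    credentials=["api-key-redacted", "svc-crm-reader"],
    integrations=[
        Integration("crm", {"read:contacts", "read:tickets"}),
        Integration("object-storage", {"read:exports", "write:exports"}),
        Integration("email", {"send"}),
    ],
)

for system, perms in effective_exposure(support_bot).items():
    print(f"{system}: {sorted(perms)}")
```

Flattening the paths this way makes the point visible: a single agent's exposure cuts across several systems at once, which is exactly what a login-centric model misses.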
Beyond Basic “Shadow AI” Detection
While many vendors discuss “shadow AI,” most tools focus solely on detecting unknown applications. Entro aims to connect various signals into a cohesive view that encompasses endpoint activity, cloud behavior, and the identities enabling access. Alvas explains that the uniqueness of AGA lies in its ability to correlate these three layers, moving beyond simple detection to understanding how AI agents operate and whether their access can be effectively governed.
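As a rough illustration of what correlating those three layers could look like, the sketch below joins simplified, hypothetical endpoint, cloud, and identity records (not Entro's schema or API) on the non-human identity they share:

```python
from collections import defaultdict

# Hypothetical, simplified event records from three separate layers.
endpoint_events = [
    {"identity": "svc-crm-reader", "host": "dev-laptop-42", "process": "llm-plugin"},
]
cloud_events = [
    {"identity": "svc-crm-reader", "service": "object-storage", "action": "GetObject"},
]
identity_records = [
    {"identity": "svc-crm-reader", "type": "service-account", "owner": "unknown"},
]

def correlate(endpoint, cloud, identities):
    """Join the three layers on the shared identity, so each agent gets one
    picture: where it runs, what it touches in the cloud, and who owns it."""
    view = defaultdict(lambda: {"endpoint": [], "cloud": [], "identity": None})
    for e in endpoint:
        view[e["identity"]]["endpoint"].append(e)
    for c in cloud:
        view[c["identity"]]["cloud"].append(c)
    for i in identities:
        view[i["identity"]]["identity"] = i
    return dict(view)

for ident, layers in correlate(endpoint_events, cloud_events, identity_records).items():
    print(ident, "->", layers)
```

The value is in the join, not in any single feed: an unowned service account plus unusual cloud reads plus an LLM plugin on an endpoint tells a story none of the layers tells alone.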
From Discovery to Control
AGA addresses both sides of the AI governance problem: discovery and control. It identifies which AI agents exist and maps how they interact with systems, then provides visibility into what they are doing and whether that activity aligns with established policies. The result is less guesswork and more context for security teams evaluating risk.
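A minimal sketch of the control side, assuming a simple per-agent allow-list policy (the agent names, systems, and permissions are illustrative and not drawn from AGA), compares observed access against what policy permits and flags the difference:

```python
# Hypothetical policy: what each agent is allowed to hold, per system.
POLICY = {
    "support-summarizer": {
        "crm": {"read:contacts", "read:tickets"},
        "email": {"send"},
    },
}

# Hypothetical observed access, e.g. assembled from the discovery step above.
OBSERVED = {
    "support-summarizer": {
        "crm": {"read:contacts", "read:tickets"},
        "object-storage": {"read:exports", "write:exports"},
        "email": {"send"},
    },
}

def policy_violations(agent: str) -> dict[str, set[str]]:
    """Return permissions the agent actually holds that policy never granted."""
    allowed = POLICY.get(agent, {})
    violations = {}
    for system, perms in OBSERVED.get(agent, {}).items():
        extra = perms - allowed.get(system, set())
        if extra:
            violations[system] = extra
    return violations

# Flags the object-storage permissions, which policy never granted this agent.
print(policy_violations("support-summarizer"))
```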
Implications for Managed Security Service Providers (MSSPs)
The challenge is compounded for MSSPs managing multiple environments. AI agents do not confine themselves to a single organization; they are proliferating across client accounts, each with its own tools and integrations. Alvas notes that AGA is part of the Entro platform, allowing MSSPs to apply the same governance and control model across the environments they manage.
As AI agents increasingly act like users—accessing systems, moving data, and triggering actions without direct human involvement—existing governance models often fail to account for their unique behaviors. By addressing these gaps, Entro’s AGA reflects a broader shift in identity management that includes not just people and applications but also agents and machine-driven access paths. As AI adoption continues to grow, the ability to monitor and control these access paths will likely become a fundamental requirement for organizations, rather than an advanced feature.