Entro Launches AI Agent Governance Tool for Enterprises
Entro Security has unveiled a new governance product designed to help companies track and control how artificial intelligence agents connect to corporate systems. As businesses struggle to understand which AI tools are in use, what data those tools can access, and which identities sit behind them, the product aims to give security teams that visibility.
Introducing Agentic Governance & Administration (AGA)
The product, named Agentic Governance & Administration (AGA), targets security and identity teams responsible for managing the increasing use of AI assistants, agent platforms, and locally run agents within enterprise environments.
As organizations rapidly adopt AI tools, initial access often stems from a simple connection made by a developer, employee, or business team. Oversight of these connections frequently lags, however, leaving security teams to work out what the connection can reach, which permissions were granted, and whether that access is appropriate.
Applying Identity Governance Principles
AGA utilizes established identity governance principles to manage AI-related access. Key principles include:
- Inventory
- Ownership
- Least-Privilege Access
- Auditability
- Enforcement
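The five principles above can be sketched as checks against a per-agent record. The data model below is an illustration, not Entro's actual schema: the field names, scopes, and the sample agent are all assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical record for a discovered AI agent; field names are
# illustrative only, not Entro's actual data model.
@dataclass
class AgentRecord:
    name: str
    owner: Optional[str]            # Ownership: a responsible human or team
    inventoried: bool               # Inventory: is the agent catalogued at all?
    granted_scopes: set = field(default_factory=set)
    required_scopes: set = field(default_factory=set)
    audit_logged: bool = False      # Auditability: is its activity recorded?
    policy_enforced: bool = False   # Enforcement: can its access be blocked?

def governance_gaps(agent: AgentRecord) -> list:
    """Return which of the five principles the agent currently violates."""
    gaps = []
    if not agent.inventoried:
        gaps.append("inventory")
    if agent.owner is None:
        gaps.append("ownership")
    if agent.granted_scopes - agent.required_scopes:
        gaps.append("least-privilege")  # holds more access than it needs
    if not agent.audit_logged:
        gaps.append("auditability")
    if not agent.policy_enforced:
        gaps.append("enforcement")
    return gaps

agent = AgentRecord(
    name="build-assistant",
    owner=None,
    inventoried=True,
    granted_scopes={"repo:read", "repo:write"},
    required_scopes={"repo:read"},
)
print(governance_gaps(agent))
# → ['ownership', 'least-privilege', 'auditability', 'enforcement']
```

A record with no gaps would pass all five checks; anything else surfaces as a concrete remediation list for the identity team.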
Entro contends that conventional identity governance tools fall short when it comes to AI agents, as the acting entity is often a service, local agent, or software process using tokens, service accounts, API keys, or secrets rather than a human user.
Building an AI Agent Profile
The system constructs what Entro describes as an AI agent profile by merging three sets of data:
- Sources: Where agents are identified, including endpoint telemetry, agent development platforms, and cloud environments.
- Targets: The enterprise applications, assets, and systems that an agent interacts with.
- Identities: Both human and non-human accounts, as well as the secrets used for authentication.
By integrating these elements, Entro aims to provide a comprehensive view of how an AI agent operates within an organization, rather than treating endpoint activity, cloud behavior, and identity management as separate challenges.
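The correlation step can be pictured as a simple join of the three inventories on an agent identifier. The dictionary keys and sample records below are assumptions for illustration, not Entro's actual data model:

```python
# Minimal sketch: merge the three data sets (sources, targets, identities)
# into a single agent profile keyed by agent_id. Keys are illustrative.
def build_agent_profile(agent_id, sources, targets, identities):
    """Correlate per-agent records from the three inventories."""
    return {
        "agent_id": agent_id,
        # Where the agent was discovered (endpoint, agent platform, cloud)
        "sources": [s for s in sources if s["agent_id"] == agent_id],
        # Enterprise systems and assets the agent interacts with
        "targets": [t for t in targets if t["agent_id"] == agent_id],
        # Human and non-human accounts plus secrets used to authenticate
        "identities": [i for i in identities if i["agent_id"] == agent_id],
    }

sources = [{"agent_id": "a1", "found_via": "endpoint-telemetry"}]
targets = [{"agent_id": "a1", "system": "crm"},
           {"agent_id": "a2", "system": "wiki"}]
identities = [{"agent_id": "a1", "kind": "service-account",
               "auth": "api-key"}]

profile = build_agent_profile("a1", sources, targets, identities)
print(profile["targets"])
# → [{'agent_id': 'a1', 'system': 'crm'}]
```

The point of the join is that a question like "which secrets does this agent use against which systems?" becomes answerable from one record instead of three disconnected tools.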
Addressing Shadow AI
A key feature of the product focuses on uncovering what Entro terms shadow AI. This encompasses not only unsanctioned use of AI software-as-a-service products and large language model tools but also locally running agents and workstation-based AI clients.
AGA integrates with endpoint detection and response tools to identify AI clients and local runtimes on employee devices. Furthermore, it connects with agent foundries such as AWS Bedrock and Copilot Studio, as well as cloud service providers, to discover agents and the non-human identities they rely on, including OAuth applications, IAM roles, and service accounts.
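Conceptually, shadow-AI detection reduces to comparing what the integrations discover against a sanctioned-tool list. The client names and allowlist below are invented for illustration:

```python
# Hypothetical sketch: flag AI clients reported by endpoint and cloud
# integrations that are not on the organization's sanctioned list.
SANCTIONED = {"copilot-desktop", "approved-llm-cli"}

def find_shadow_ai(discovered_clients):
    """Return names of discovered AI clients absent from the allowlist."""
    return sorted(c["name"] for c in discovered_clients
                  if c["name"] not in SANCTIONED)

discovered = [
    {"name": "copilot-desktop", "host": "laptop-17"},
    {"name": "local-agent-runtime", "host": "laptop-04"},
    {"name": "unapproved-chat-tool", "host": "laptop-22"},
]
print(find_shadow_ai(discovered))
# → ['local-agent-runtime', 'unapproved-chat-tool']
```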
Monitoring and Enforcement
The second aspect of AGA emphasizes monitoring and enforcement. According to Entro, the product provides visibility into Model Context Protocol (MCP) activity, the tools agents invoke, and the services they connect to. It also includes policy controls for approved MCP targets and AI client behavior, along with audit trails that record allowed and blocked activity, reducing the risk of exposing sensitive data and secrets.
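The enforcement pattern described, an allowlist of approved MCP targets plus an audit trail of allowed and blocked calls, can be sketched as follows. The target names, agent name, and log format are assumptions, not AGA's actual policy engine:

```python
# Minimal sketch of MCP policy enforcement: an allowlist of approved
# targets plus an audit trail recording every decision. All names are
# hypothetical.
APPROVED_MCP_TARGETS = {"internal-docs-server", "ticketing-server"}
audit_log = []

def check_mcp_call(agent, target, tool):
    """Allow or block an agent's MCP tool invocation and record it."""
    allowed = target in APPROVED_MCP_TARGETS
    audit_log.append({
        "agent": agent,
        "target": target,
        "tool": tool,
        "decision": "allowed" if allowed else "blocked",
    })
    return allowed

check_mcp_call("support-bot", "ticketing-server", "create_ticket")
check_mcp_call("support-bot", "unknown-server", "read_file")
print([e["decision"] for e in audit_log])
# → ['allowed', 'blocked']
```

The audit trail is what makes after-the-fact review possible: every blocked call to an unapproved target leaves a record that can be investigated.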
This approach reflects a broader shift in enterprise security, where mere discovery of AI tools is no longer sufficient. Security teams increasingly want to know not just that AI tools exist, but whether they operate within policy, whether their access can be limited, and how their activity can be reviewed after the fact.
Conclusion
As organizations navigate the complexities of AI adoption, Entro’s AGA aims to empower security and identity teams to map AI connections, review permissions, and enforce policies. With AI use becoming commonplace across enterprise systems, this tool is positioned as a critical component in maintaining oversight and control over AI integration.