Entro Unveils AI Governance Tool for Enhanced Security in Enterprises

Entro Security has unveiled a new governance product designed to help companies track and control how artificial intelligence agents connect to corporate systems. As businesses grapple with understanding which tools are in use, the data they can access, and the identities behind them, Entro positions the product as a timely answer.

Introducing Agentic Governance & Administration (AGA)

The product, named Agentic Governance & Administration (AGA), targets security and identity teams responsible for managing the increasing use of AI assistants, agent platforms, and locally run agents within enterprise environments.

As organizations rapidly adopt AI tools, initial access often stems from a simple connection made by a developer, employee, or business team. However, the oversight of these connections frequently lags, leaving security teams to determine the extent of access, the permissions granted, and their appropriateness.

Applying Identity Governance Principles

AGA utilizes established identity governance principles to manage AI-related access. Key principles include:

  • Inventory
  • Ownership
  • Least-Privilege Access
  • Auditability
  • Enforcement

Entro contends that conventional identity governance tools fall short when it comes to AI agents, as the acting entity is often a service, local agent, or software process using tokens, service accounts, API keys, or secrets rather than a human user.

Building an AI Agent Profile

The system constructs what Entro describes as an AI agent profile by merging three sets of data:

  • Sources: Where agents are identified, including endpoint telemetry, agent development platforms, and cloud environments.
  • Targets: The enterprise applications, assets, and systems that an agent interacts with.
  • Identities: Both human and non-human accounts, as well as the secrets used for authentication.

By integrating these elements, Entro aims to provide a comprehensive view of how an AI agent operates within an organization, rather than treating endpoint activity, cloud behavior, and identity management as separate challenges.
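The three-way join described above can be sketched as follows. The dictionaries, keys, and example values are assumptions for illustration only; Entro has not published its data model.

```python
# Three views of the same agent, keyed by a shared agent ID.
sources = {"agent-42": {"discovered_by": "endpoint-telemetry",
                        "platform": "local-runtime"}}
targets = {"agent-42": ["crm-app", "finance-reports-bucket"]}
identities = {"agent-42": {"kind": "service-account",
                           "auth": "api-key (redacted)"}}

def build_profile(agent_id: str) -> dict:
    """Merge source, target, and identity views into one agent profile."""
    return {
        "agent": agent_id,
        "source": sources.get(agent_id, {}),
        "targets": targets.get(agent_id, []),
        "identity": identities.get(agent_id, {}),
    }

profile = build_profile("agent-42")
print(profile)
```

Joining on a common agent identifier is what turns three separate monitoring problems into a single answerable question: what is this agent, where did it come from, and what can it reach.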

Addressing Shadow AI

A key feature of the product focuses on uncovering what Entro terms shadow AI. This encompasses not only unsanctioned use of AI software-as-a-service products and large language model tools but also locally running agents and workstation-based AI clients.

AGA integrates with endpoint detection and response tools to identify AI clients and local runtimes on employee devices. Furthermore, it connects with agent foundries such as AWS Bedrock and Copilot Studio, as well as cloud service providers, to discover agents and the non-human identities they rely on, including OAuth applications, IAM roles, and service accounts.
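A toy version of the endpoint-side discovery step might compare running process names against a watchlist of known AI clients. Real EDR integrations work on far richer telemetry; the process names and watchlist here are examples, not a list the product uses.

```python
# Hypothetical watchlist of local AI client binaries (example names only).
KNOWN_AI_CLIENTS = {"ollama", "lmstudio", "claude", "chatgpt-desktop"}

def find_shadow_ai(process_names: list, sanctioned: set) -> list:
    """Return AI clients that are running but not on the sanctioned list."""
    running_ai = {p.lower() for p in process_names} & KNOWN_AI_CLIENTS
    return sorted(running_ai - sanctioned)

# A workstation running a sanctioned client (lmstudio) and an unsanctioned one.
hits = find_shadow_ai(["chrome", "Ollama", "slack", "lmstudio"],
                      sanctioned={"lmstudio"})
print(hits)
```

The interesting part is the second set difference: shadow AI is defined not by being an AI tool, but by being an AI tool nobody approved.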

Monitoring and Enforcement

The second aspect of AGA emphasizes monitoring and enforcement. According to Entro, the product provides visibility into MCP (Model Context Protocol) activity, the tools agents invoke, and the services they connect to. It also includes policy controls for approved MCP targets and AI client behavior, along with audit trails that document allowed and blocked activity to mitigate the risk of exposing sensitive data and secrets.
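An allowlist-plus-audit-trail gate of the kind described can be sketched in a few lines. The policy format, target names, and function are assumptions for the example, not Entro's implementation.

```python
import datetime

# Hypothetical allowlist of approved MCP targets.
APPROVED_MCP_TARGETS = {"mcp://internal-wiki", "mcp://ticketing"}
audit_trail = []

def check_mcp_call(agent_id: str, target: str) -> bool:
    """Allow only approved MCP targets; record every decision either way."""
    allowed = target in APPROVED_MCP_TARGETS
    audit_trail.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent": agent_id,
        "target": target,
        "decision": "allowed" if allowed else "blocked",
    })
    return allowed

check_mcp_call("agent-42", "mcp://internal-wiki")  # permitted target
check_mcp_call("agent-42", "mcp://unknown-tool")   # blocked, but still logged
```

Logging blocked calls alongside allowed ones is what makes the trail useful for review: a spike in blocked targets is often the first sign of a misbehaving or compromised agent.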

This approach reflects a broader shift in enterprise security, where mere discovery of AI tools is no longer sufficient. Security teams increasingly want to know not just that AI tools exist, but whether they operate within policy, whether their access can be limited, and how to audit their activity after the fact.

Conclusion

As organizations navigate the complexities of AI adoption, Entro’s AGA aims to empower security and identity teams to map AI connections, review permissions, and enforce policies. With AI use becoming commonplace across enterprise systems, this tool is positioned as a critical component in maintaining oversight and control over AI integration.
