Identity Security as the Cornerstone of Agentic AI

Why Identity Security is the Foundation for Agentic AI

Across Asia-Pacific’s technology landscape, the conversation around AI has shifted decisively. AI is no longer framed as an emerging capability or a strategic experiment. Agentic systems are already embedded in enterprise operations, interacting with production environments and executing decisions that once sat firmly with human operators.

Autonomous agents now write code, provision infrastructure, query regulated data, and initiate actions across sensitive, critical systems. They operate at a speed and scale that traditional governance models were not designed to accommodate. The central challenge facing enterprises, then, is not whether agentic AI will affect operations, but whether existing control frameworks can evolve fast enough to remain credible.

Addressing AI Governance: Insights from Asia-Pacific Leaders

In Asia-Pacific, AI investment has moved from pilot programs to core infrastructure. IDC forecasts that AI and generative AI investments in the region will reach US$175 billion by 2028, reflecting deployment across a growing range of business functions. The region stands at the forefront of this shift, shaped by high cloud adoption, regulatory maturity, and national approaches that treat digital capability as economic infrastructure.

Within Asian enterprises, identity teams are already seeing a change in demand. A growing share of access requests originates from non-human actors: not traditional system accounts, but adaptive agents that initiate actions and evolve continuously. In parallel, risk and compliance functions are being asked to govern systems that move faster than established review cycles allow. The question that increasingly defines governance discussions is one of accountability. When an autonomous system accesses sensitive data or triggers a regulated process, responsibility must be clearly defined and demonstrable.

Where Identity Security Enables Safe AI Adoption

Every AI agent operates through identity. It authenticates, requests authorization, and exercises privilege. Treating these identities as secondary to human users creates structural blind spots. AI identities require the same rigor applied to employees, partners, and machines, particularly because their potential impact is broader and less predictable.
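The lifecycle described above — authenticate, authorize, exercise privilege — can be made concrete with a minimal sketch that treats an agent as a first-class identity with an accountable owner, explicit scopes, and an audit trail. The names here (`AgentIdentity`, `authorize`, the scope strings) are illustrative assumptions, not a reference to any particular product or standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """An AI agent modeled as a first-class identity, not a shared service account."""
    agent_id: str
    owner: str                                 # accountable human or team
    scopes: set = field(default_factory=set)   # explicitly granted privileges
    audit_log: list = field(default_factory=list)

    def authorize(self, action: str) -> bool:
        """Check a requested action against granted scopes and record the decision."""
        allowed = action in self.scopes
        self.audit_log.append({
            "when": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "allowed": allowed,
        })
        return allowed

# Hypothetical agent: every request is checked and logged,
# so access is demonstrable after the fact.
agent = AgentIdentity("billing-agent-01", owner="finance-platform-team",
                      scopes={"invoices:read"})
print(agent.authorize("invoices:read"))    # granted scope: allowed
print(agent.authorize("invoices:delete"))  # never granted: denied, but still logged
```

The point of the sketch is the audit record: even a denied request leaves evidence, which is what makes accountability for autonomous actors demonstrable rather than assumed.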

When AI identities are discoverable, governed, and auditable, organizations can scale agentic systems without relying on informal trust or retrospective controls. Identity security, when designed appropriately, does not impede innovation. Instead, it reduces ambiguity, allowing AI initiatives to progress with clearer accountability rather than unchecked risk.

IDC’s 2025 Future Enterprise Resiliency and Spending (FERS) Survey found that 34% of organizations in the region now prioritize AI governance as generative AI moves into core business workflows. In this context, identity governance becomes a practical mechanism for balancing speed with control, translating ambition into systems that can be defended, audited, and sustained.

Singapore’s “Build Your Own AI” Movement

Singapore’s enterprise culture has long favored internal capability development. Financial institutions, logistics firms, technology companies, and public sector agencies are building their own AI agents to streamline workflows, support decision-making, and reduce operational friction. This momentum can deliver advantages, but it also introduces risk when oversight does not keep pace.

For instance, “shadow AI” tools, which operate outside formal governance, are becoming difficult to inventory, harder to audit, and challenging to defend under regulatory scrutiny, exposing organizations to additional risk.

Gartner predicts that by 2026, 40% of enterprise applications will feature task-specific AI agents, up from less than 5% today. By that same period, more than 80% of enterprises are expected to have deployed generative AI capabilities through APIs or embedded applications. The implication is clear: identity sprawl will accelerate rapidly.

AI agents are emerging quickly, and APAC enterprises are likely to encounter governance pressures earlier than many regions due to deeper adoption and higher expectations of control. Embedding identity security from inception helps ensure AI supports productivity rather than becoming a source of unmanaged exposure.

Establishing a Framework for Trust and Control

Managing agentic systems requires an identity-centric approach grounded in visibility and context. Organizations need to systematically discover and classify AI components in operation, assess their security and compliance posture, and track how permissions change over time. Without this foundation, control remains largely theoretical.
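The tracking step above can be illustrated with a short sketch that diffs permission snapshots taken at different points in time, surfacing identities whose privileges have silently expanded. The snapshot format (a mapping of identity to scope set) is an assumption chosen for clarity, not a standard.

```python
def permission_drift(before: dict, after: dict) -> dict:
    """Compare two {identity: set_of_scopes} snapshots and report what changed."""
    drift = {}
    for identity in before.keys() | after.keys():
        old = before.get(identity, set())
        new = after.get(identity, set())
        added, removed = new - old, old - new
        if added or removed:
            drift[identity] = {"added": sorted(added), "removed": sorted(removed)}
    return drift

# Hypothetical review cycle: the agent's privileges grew between snapshots,
# and a new, previously uninventoried identity appeared.
january = {"report-agent": {"data:read"}}
june = {"report-agent": {"data:read", "data:write"},
        "provision-agent": {"infra:provision"}}
print(permission_drift(january, june))
```

Run periodically against an identity inventory, a diff like this turns "track how permissions change over time" from a policy statement into an operational control.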

The Information Security Media Group’s Cybersecurity Implications of AI Pulse Report 2025 reflects this shift, noting that AI adoption is reshaping both the threat surface and governance landscape across sectors. As autonomous systems take on more consequential roles, identity increasingly functions as the control layer through which accountability and oversight are maintained.

Strengthening Digital Trust Through Identity Governance

AI is becoming a significant source of privileged access within enterprises. Yet executive visibility into the scope, privilege levels, and authority of non-human accounts remains limited, particularly where governance models still treat identity as primarily human. As agentic systems proliferate, this imbalance between identity growth and oversight is likely to intensify unless governance models evolve accordingly.

Across the region, CIOs and CISOs broadly recognize that future identity growth will be driven more by autonomous systems than by workforce expansion. Governance therefore needs to advance alongside deployment to shape how AI identities are defined, constrained, and trusted.

Ultimately, the reliability of AI systems depends on the identity controls that govern their actions.
