AI Adoption in Asia Pacific and Japan: A Study on Governance and Security Challenges
As organizations across Asia Pacific and Japan (APJ) rapidly expand their use of artificial intelligence, embedding AI tools and agents into business operations at scale, a new study reveals significant gaps in governance and security measures. According to a recent poll conducted by Okta during its Oktane on the Road events, the pace of AI adoption is outstripping organizational readiness to manage the associated risks.
Key Findings from the AI Security Poll
The Okta security poll, conducted across Australia, Singapore, and Japan, highlights a troubling trend: as AI systems are deployed, there is a lack of clear accountability and governance structures to manage the risks tied to these technologies.
One of the most pressing issues identified is the inadequate security of non-human identities. In Australia, for instance, only 10% of respondents indicated their identity systems were fully equipped to secure AI agents, bots, and service accounts, while 52% claimed only partial readiness. In a separate finding, 41% of Australian respondents stated that no single individual or team is responsible for managing AI security.
Similar findings were observed in Singapore and Japan, where accountability is often dispersed across multiple functions, leading to fragmented ownership of AI-related security risks.
The Rise of Shadow AI
The report also points to the emergence of “shadow AI”, defined as the use of unapproved or unsupervised AI tools within organizations. This is particularly concerning, as it was flagged as the top security threat in Australia (35%) and Singapore (33%), while data leakage was the primary concern in Japan (36%).
Monitoring and Detection Challenges
Another alarming finding is the lack of visibility into AI system behavior after deployment. Fewer than one-third of respondents expressed confidence in their ability to detect when an AI agent operates outside its intended scope, with confidence particularly low in Australia (18%) and Japan (8%).
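Detecting scope drift of the kind the poll describes typically starts with something simple: declare what each agent is allowed to do, record every action, and flag anything outside the declaration. The sketch below is a hypothetical illustration (the agent name, scope labels, and actions are invented for this example, not drawn from the poll):

```python
# Minimal sketch of out-of-scope detection for an AI agent.
# Agent names, scope strings, and actions here are hypothetical examples.

ALLOWED_SCOPES = {
    "report-summarizer": {"read:documents", "write:summaries"},
}

def audit_action(agent: str, action: str, log: list) -> bool:
    """Record the action in the audit log and return whether it was in scope."""
    in_scope = action in ALLOWED_SCOPES.get(agent, set())
    log.append({"agent": agent, "action": action, "in_scope": in_scope})
    return in_scope

audit_log: list = []
audit_action("report-summarizer", "read:documents", audit_log)  # in scope
audit_action("report-summarizer", "delete:records", audit_log)  # out of scope

# Out-of-scope entries are what a monitoring team would review or alert on.
flagged = [entry for entry in audit_log if not entry["in_scope"]]
```

Even this crude allowlist approach gives an organization both of the things respondents said they lacked: a record of what agents actually did, and a signal when behavior departs from intent.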
Identity Systems: Not Ready for the AI Workforce
The poll further reveals that existing identity and access management frameworks are largely unprepared for the influx of non-human identities. Across all surveyed countries, fewer than 10% of respondents said their identity systems could manage and secure AI agents effectively; most described their capabilities as only partially prepared.
This presents a significant structural challenge, as AI systems require appropriate credentials and permissions to interact with applications and data. Many identity systems have been designed primarily for human users, leading to situations where AI agents may have excessive access or lack sufficient audit trails.
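The gap described above, excessive access and missing audit trails, is often addressed by treating each agent as a first-class identity with explicit grants, a named accountable owner, and an expiry that forces periodic review. The sketch below illustrates that idea; the identity fields, team names, and permission strings are assumptions for the example, not a description of any particular product:

```python
# Hypothetical sketch: a non-human identity with least-privilege grants,
# an accountable owning team, and a built-in expiry for access review.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class NonHumanIdentity:
    name: str
    owner_team: str                       # accountable team, addressing fragmented ownership
    permissions: frozenset = frozenset()  # explicit grants only; nothing is implied
    expires_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc) + timedelta(days=30)
    )

    def can(self, permission: str) -> bool:
        """Allow access only while unexpired and only for an explicit grant."""
        return (
            datetime.now(timezone.utc) < self.expires_at
            and permission in self.permissions
        )

agent = NonHumanIdentity(
    name="invoice-bot",
    owner_team="finance-platform",
    permissions=frozenset({"read:invoices"}),
)
agent.can("read:invoices")   # explicitly granted
agent.can("write:payments")  # denied: never granted
```

The design choice worth noting is the default-deny posture: an agent can do nothing it was not explicitly granted, which is the inverse of the over-permissive defaults many human-centric identity systems carry.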
Awareness vs. Engagement at the Board Level
Despite rising awareness of AI-related risks among senior leadership, there remains a gap in engagement. In Australia, 70% of boards are aware of these risks, but only 28% are fully engaged. Singapore shows 50% awareness with 31% engagement, while Japan leads with 78% awareness and 43% engagement, attributed to regulatory expectations and a focus on data integrity.
Conclusion: The Need for Stronger Governance
The findings underscore a critical imbalance between the rapid adoption of AI technologies and the organizational preparedness to govern them effectively. As AI agents become integral to operational workflows, organizations must establish clear accountability structures, improve visibility, and enhance identity controls.
While awareness of AI risks is growing, governance frameworks are still evolving and are not consistently embedded across leadership structures. The necessary organizational controls are lagging behind adoption, and closing that gap demands immediate attention if these systems are to be deployed securely.