AI Adoption Outpaces Governance in Asia Pacific and Japan

AI Adoption in Asia Pacific and Japan: A Study on Governance and Security Challenges

As organizations across Asia Pacific and Japan (APJ) rapidly increase their use of artificial intelligence, embedding AI tools and agents into business operations at scale, a new study reveals significant gaps in governance and security measures. According to a recent poll conducted by Okta during its Oktane on the Road events, the pace of AI adoption is outstripping organizational readiness to manage the associated risks.

Key Findings from the AI Security Poll

The Okta security poll, conducted across Australia, Singapore, and Japan, highlights a troubling trend: as AI systems are deployed, there is a lack of clear accountability and governance structures to manage the risks tied to these technologies.

One of the most pressing issues identified is the inadequate security of non-human identities. In Australia, for instance, only 10% of respondents indicated their identity systems were fully equipped to secure AI agents, bots, and service accounts. A further 52% claimed only partial readiness, while a concerning 41% stated that no single individual or team is responsible for managing AI security.

Similar findings were observed in Singapore and Japan, where accountability often resides across multiple functions, leading to fragmented ownership of AI-related security risks.

The Rise of Shadow AI

The report also points to the emergence of “shadow AI”, defined as the use of unapproved or unsupervised AI tools within organizations. This is particularly concerning, as it was flagged as the top security threat in Australia (35%) and Singapore (33%), while data leakage was the primary concern in Japan (36%).

Monitoring and Detection Challenges

Another alarming finding is the lack of visibility into AI system behavior post-deployment. Less than one-third of respondents expressed confidence in their ability to detect when an AI agent operates outside its intended scope, with confidence particularly low in Australia (18%) and Japan (8%).

Identity Systems: Not Ready for the AI Workforce

The poll further reveals that existing identity and access management frameworks are largely unprepared for the influx of non-human identities. Across all surveyed countries, fewer than 10% said their identity systems could manage and secure AI agents effectively. Most organizations described their capabilities as merely partially prepared.

This presents a significant structural challenge, as AI systems require appropriate credentials and permissions to interact with applications and data. Many identity systems have been designed primarily for human users, leading to situations where AI agents may have excessive access or lack sufficient audit trails.
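The control gap described above can be sketched in code. The following minimal Python example is purely illustrative (the `AgentCredential` class, its scope names, and the audit format are invented for this sketch, not Okta's or any vendor's API): it shows the two properties the article says many identity systems lack for non-human identities, an explicit least-privilege allow-list of permissions and an audit trail that records every access decision, so out-of-scope behavior by an agent is at least detectable after the fact.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentCredential:
    """Illustrative credential for a non-human identity (an AI agent)."""
    agent_id: str
    allowed_scopes: frozenset  # explicit allow-list; nothing is implicit
    audit_log: list = field(default_factory=list)

    def authorize(self, scope: str) -> bool:
        granted = scope in self.allowed_scopes
        # Record every decision, granted or denied, so out-of-scope
        # attempts leave an audit trail rather than going unnoticed.
        self.audit_log.append({
            "agent": self.agent_id,
            "scope": scope,
            "granted": granted,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return granted

# Hypothetical usage: an invoice-processing agent scoped to read-only access.
cred = AgentCredential("invoice-bot", frozenset({"invoices:read"}))
cred.authorize("invoices:read")   # within scope -> True
cred.authorize("payroll:write")   # out of scope -> False, but logged
denied = [e for e in cred.audit_log if not e["granted"]]
print(f"{len(denied)} out-of-scope attempt(s) recorded")  # prints "1 out-of-scope attempt(s) recorded"
```

A human-centric identity system typically skips both steps for service accounts: permissions are granted broadly at provisioning time and individual actions are not logged per identity, which is exactly the "excessive access" and "lack of audit trails" problem the poll highlights.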

Awareness vs. Engagement at the Board Level

Despite rising awareness of AI-related risks among senior leadership, there remains a gap in engagement. In Australia, 70% of boards are aware of these risks, but only 28% are fully engaged. Singapore shows 50% awareness with 31% engagement, while Japan leads with 78% awareness and 43% engagement, attributed to regulatory expectations and a focus on data integrity.

Conclusion: The Need for Stronger Governance

The findings underscore a critical imbalance between the rapid adoption of AI technologies and the organizational preparedness to govern them effectively. As AI agents become integral to operational workflows, organizations must establish clear accountability structures, improve visibility, and enhance identity controls.

While awareness of AI risks is growing, governance frameworks are still evolving and not consistently embedded across leadership structures. The results indicate that while AI adoption in APJ is accelerating, the necessary organizational controls and governance mechanisms are lagging behind, necessitating immediate attention to secure these advanced systems.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...