Governance Helps Agentic AI Move Faster Inside Companies
Enthusiasm leads to execution, and execution leads to results. That progression often marks how technology takes hold inside enterprises, and agentic AI is no exception. Recent developments observed in The Prompt Economy show that companies are enthusiastic and beginning to execute, putting systems in place to yield tangible results.
The Gap Between Enthusiasm and Organizational Readiness
A new report from Harvard Business Review Analytic Services reveals that enthusiasm for agentic AI significantly outpaces organizational readiness. Most executives anticipate that agentic AI will transform their businesses, with many believing it will become a standard within their industries. Early adopters are already experiencing productivity gains and improved decision-making. However, the report highlights that real-world usage remains limited for many organizations, with only a minority deploying agentic AI at scale. This gap is not about a lack of belief in the technology; rather, it is rooted in inadequate preparation.
The report indicates that while data foundations are improving, critical areas such as governance, workforce skills, and clear success measures lag behind. Few organizations have defined what success looks like or how to manage risks when AI systems operate with greater autonomy. Leaders making strides in this area tend to focus on practical use cases, invest in workforce readiness, and align agentic AI initiatives with their business strategy. The conclusion is clear: agentic AI can deliver meaningful value, but only for organizations willing to rethink processes, invest in their people, and implement robust governance frameworks prior to scaling.
Formal Governance Frameworks: A Case Study from Singapore
Governance can also be mandated. According to Computer Weekly, Singapore has introduced what is described as the world’s first formal governance framework specifically designed for agentic AI. Announced at the World Economic Forum in Davos by the country’s minister for digital development and information, the framework aims to assist organizations in deploying AI agents capable of planning, deciding, and acting with minimal human intervention.
Developed by the Infocomm Media Development Authority (IMDA), the framework builds upon Singapore’s previous AI governance initiatives but shifts its focus from generative AI to systems that can perform real-world actions, such as updating databases or processing payments. The goal is to balance productivity gains with safeguards against new operational and security risks.
This framework outlines practical steps for enterprises, including the establishment of clear limits on AI agents’ autonomy, specifying when human approval is necessary, and monitoring systems throughout their lifecycle. It also identifies risks such as unauthorized actions and automation bias, where users may place excessive trust in systems that have previously performed well. Industry leaders have welcomed this initiative, emphasizing the need for clear rules as agentic AI begins to influence decisions with real-world consequences. The IMDA is treating the framework as a living document, seeking feedback from companies for ongoing refinement.
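Controls like these can be made concrete in code. The sketch below is a hypothetical illustration (not taken from the IMDA framework): an approval gate that caps an agent's autonomy, executing low-risk actions automatically, holding higher-risk ones for human sign-off, and logging every decision so the system can be monitored throughout its lifecycle. All names here (`ApprovalGate`, `Risk`, the example agents) are invented for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

class Risk(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class ApprovalGate:
    """Holds agent actions above a risk threshold for human review."""
    max_autonomous_risk: Risk = Risk.LOW
    audit_log: list = field(default_factory=list)

    def submit(self, agent: str, action: str, risk: Risk) -> str:
        if risk.value <= self.max_autonomous_risk.value:
            status = "executed"
        else:
            status = "pending_human_approval"
        # Record every decision, approved or not, for lifecycle monitoring.
        self.audit_log.append({"agent": agent, "action": action,
                               "risk": risk.name, "status": status})
        return status

gate = ApprovalGate(max_autonomous_risk=Risk.LOW)
print(gate.submit("billing-agent", "update_database_record", Risk.LOW))  # executed
print(gate.submit("billing-agent", "process_payment", Risk.HIGH))        # pending_human_approval
```

The key design choice is that the threshold is explicit configuration, not buried in agent logic, which makes the "when is human approval necessary" question auditable.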
Challenges of Identity Management in AI Adoption
Another report cautions that enterprises are rapidly adopting agentic AI while lagging in governance and security measures. Executives from Accenture and Okta indicate that while most companies already utilize AI agents for everyday business tasks, very few have established effective oversight mechanisms. According to Okta, over 90% of organizations are using AI agents, yet only a small fraction believes they have strong governance strategies in place.
Accenture’s research corroborates this imbalance, showing widespread use of AI agents without clear plans for managing the risks they introduce. The core challenge highlighted by the report is that AI agents are increasingly functioning as digital employees without being managed accordingly. These agents require access to systems, data, and workflows, creating new risks if their identities and permissions are not clearly defined.
The authors recommend treating AI agents as formal digital identities, emphasizing the need for clear rules around authentication, access, monitoring, and lifecycle management. Without this structured approach, organizations risk creating unmanaged identity sprawl, potentially transforming agentic AI from a productivity enhancer into a significant security and compliance issue.
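A minimal sketch of what treating an agent as a formal digital identity might look like, assuming a simple in-process model (the class and field names below are hypothetical, not from the Accenture or Okta reports): each agent gets an identity record with an accountable owner, explicitly granted scopes, and a lifecycle, so access checks and revocation work the same way they do for human accounts.

```python
from dataclasses import dataclass, field

@dataclass
class AgentIdentity:
    """A first-class identity for an AI agent, with scoped
    permissions and an explicit lifecycle (active -> revoked)."""
    agent_id: str
    owner: str                          # accountable human or team
    scopes: set = field(default_factory=set)
    active: bool = True

    def can(self, scope: str) -> bool:
        # Access control: only active identities with an explicit grant pass.
        return self.active and scope in self.scopes

    def revoke(self) -> None:
        # Lifecycle management: decommissioning cuts off all access at once.
        self.active = False

agent = AgentIdentity("invoice-agent-01", owner="finance-team",
                      scopes={"crm:read", "billing:write"})
print(agent.can("billing:write"))  # True
agent.revoke()
print(agent.can("billing:write"))  # False
```

Because every grant is tied to a named identity with an owner, auditing and compliance reduce to inspecting these records rather than chasing shared credentials across systems.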
In summary, the report asserts, “Agents need their own identity. Once you accept that, everything else flows — access control, governance, auditing, and compliance.”