Accelerating Agentic AI Adoption Through Effective Governance

Governance Helps Agentic AI Move Faster Inside Companies

Enthusiasm leads to execution, and execution leads to results. That equation often describes how new technology takes hold inside enterprises, and agentic AI is no exception. Recent developments covered in The Prompt Economy show that companies are enthusiastic and beginning to execute, putting systems in place to produce tangible results.

The Gap Between Enthusiasm and Organizational Readiness

A new report from Harvard Business Review Analytic Services reveals that enthusiasm for agentic AI significantly outpaces organizational readiness. Most executives anticipate that agentic AI will transform their businesses, with many believing it will become a standard within their industries. Early adopters are already experiencing productivity gains and improved decision-making. However, the report highlights that real-world usage remains limited for many organizations, with only a minority deploying agentic AI at scale. This gap is not about a lack of belief in the technology; rather, it is rooted in inadequate preparation.

The report indicates that while data foundations are improving, critical areas such as governance, workforce skills, and clear success measures lag behind. Few organizations have defined what success looks like or how to manage risks when AI systems operate with greater autonomy. Leaders making strides in this area tend to focus on practical use cases, invest in workforce readiness, and align agentic AI initiatives with their business strategy. The conclusion is clear: agentic AI can deliver meaningful value, but only for organizations willing to rethink processes, invest in their people, and implement robust governance frameworks prior to scaling.

Formal Governance Frameworks: A Case Study from Singapore

Governance can also be mandated. According to Computer Weekly, Singapore has introduced what is described as the world’s first formal governance framework specifically designed for agentic AI. Announced at the World Economic Forum in Davos by the country’s minister for digital development and information, the framework aims to assist organizations in deploying AI agents capable of planning, deciding, and acting with minimal human intervention.

Developed by the Infocomm Media Development Authority (IMDA), the framework builds upon Singapore’s previous AI governance initiatives but shifts its focus from generative AI to systems that can perform real-world actions, such as updating databases or processing payments. The goal is to balance productivity gains with safeguards against new operational and security risks.

This framework outlines practical steps for enterprises, including the establishment of clear limits on AI agents’ autonomy, specifying when human approval is necessary, and monitoring systems throughout their lifecycle. It also identifies risks such as unauthorized actions and automation bias, where users may place excessive trust in systems that have previously performed well. Industry leaders have welcomed this initiative, emphasizing the need for clear rules as agentic AI begins to influence decisions with real-world consequences. The IMDA is treating the framework as a living document, seeking feedback from companies for ongoing refinement.
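The "limits on autonomy" idea above can be made concrete in code. The sketch below is an illustration, not part of the IMDA framework itself: the risk tiers, class names, and threshold logic are all assumptions chosen to show how an enterprise might route high-risk agent actions to a human approver while logging every decision for lifecycle monitoring.

```python
from dataclasses import dataclass

# Hypothetical risk tiers for agent actions -- the framework does not
# prescribe these; they are illustrative only.
LOW, MEDIUM, HIGH = 0, 1, 2

@dataclass
class AgentAction:
    name: str
    risk: int  # one of LOW, MEDIUM, HIGH

class AutonomyPolicy:
    """Sketch of an autonomy limit: actions at or above a configured
    risk threshold are routed to a human for approval; every decision
    is appended to an audit log for lifecycle monitoring."""

    def __init__(self, approval_threshold: int = HIGH):
        self.approval_threshold = approval_threshold
        self.audit_log: list[tuple[str, str]] = []

    def authorize(self, action: AgentAction) -> str:
        if action.risk >= self.approval_threshold:
            decision = "needs_human_approval"
        else:
            decision = "auto_approved"
        self.audit_log.append((action.name, decision))
        return decision
```

Tightening the threshold (say, to `MEDIUM`) widens the set of actions that require a human in the loop, which is one way to operationalize the framework's guidance on specifying when human approval is necessary.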

Challenges of Identity Management in AI Adoption

Another report cautions that enterprises are rapidly adopting agentic AI while lagging in governance and security measures. Executives from Accenture and Okta indicate that while most companies already use AI agents for everyday business tasks, very few have established effective oversight mechanisms. According to Okta, more than 90% of organizations use AI agents, yet only a small fraction believe they have a strong governance strategy in place.

Accenture’s research corroborates this imbalance, showing widespread use of AI agents without clear plans for managing the risks they introduce. The core challenge highlighted by the report is that AI agents are increasingly functioning as digital employees without being managed accordingly. These agents require access to systems, data, and workflows, creating new risks if their identities and permissions are not clearly defined.

The authors recommend treating AI agents as formal digital identities, emphasizing the need for clear rules around authentication, access, monitoring, and lifecycle management. Without this structured approach, organizations risk creating unmanaged identity sprawl, potentially transforming agentic AI from a productivity enhancer into a significant security and compliance issue.
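A minimal sketch of what "AI agents as formal digital identities" might look like in practice follows. Every name and field here is an assumption for illustration, not a standard schema or anything the report prescribes; the point is that each agent gets its own auditable identity, an accountable owner, least-privilege scopes, and an explicit end of life.

```python
import uuid
from datetime import datetime, timezone

class AgentIdentity:
    """Illustrative sketch: an AI agent as a first-class digital identity
    with its own identifier, scoped permissions, and lifecycle state."""

    def __init__(self, owner: str, purpose: str):
        self.agent_id = f"agent-{uuid.uuid4()}"   # unique, auditable identity
        self.owner = owner                        # accountable human or team
        self.purpose = purpose
        self.scopes: set[str] = set()             # least-privilege permissions
        self.active = True
        self.created_at = datetime.now(timezone.utc)

    def grant(self, scope: str) -> None:
        self.scopes.add(scope)

    def can(self, scope: str) -> bool:
        # Access checks fail closed once the identity is deprovisioned.
        return self.active and scope in self.scopes

    def deprovision(self) -> None:
        # End of lifecycle: revoke all access rather than leaving an
        # orphaned identity behind (the "identity sprawl" risk above).
        self.active = False
        self.scopes.clear()
```

Deprovisioning is the step most often missed with service accounts; making it an explicit lifecycle operation is what keeps agent identities from accumulating as unmanaged sprawl.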

In summary, the report asserts, “Agents need their own identity. Once you accept that, everything else flows — access control, governance, auditing, and compliance.”

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...