Singapore’s Framework for Responsible AI: Ensuring Human Oversight in Agentic Systems

Agentic AI Gets a Rules Framework: Singapore Insists Humans Stay in Charge

In response to rapid advancements in artificial intelligence (AI), particularly in the deployment of agentic AI, the Singapore government has introduced a comprehensive framework aimed at mitigating risks associated with this technology. The Infocomm Media Development Authority (IMDA) released its Model AI Governance Framework for Agentic AI, emphasizing the importance of human oversight in the workplace.

The New Framework

This newly launched framework builds upon Singapore’s previous Model AI Governance Framework from 2019, which primarily focused on principles like transparency, fairness, and human-centricity. Growing concerns about increasingly autonomous AI, including the trajectory towards artificial general intelligence (AGI), prompted this updated approach, a shift industry leaders have also highlighted.

Singapore stands out for its practical implementation of AI in government services, having successfully deployed virtual assistants like ‘Ask Jamie’, which has handled over 15 million queries since its inception in 2014.

Human Oversight is Mandatory

The framework underscores that while agents may operate autonomously, humans remain responsible for their actions. Organizations are encouraged to define clear accountability structures and to maintain effective human oversight over time, guarding against automation bias (the tendency to over-trust automated decisions). The framework also calls for technical safeguards, including safety testing and continuous monitoring.

Evaluating AI Agents

Research from McKinsey & Co. estimates that agentic AI systems could unlock between $2.6 trillion and $4.4 trillion annually across various sectors, including customer service and compliance. Evaluating AI agents is a complex field that covers several types of agents:

  • Coding agents that can write or test code.
  • Conversational agents useful for support and coaching.
  • Research agents that gather and analyze information.
  • Computer use agents that interact with software similarly to humans.
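
Whatever the agent type, evaluation typically means running it against a battery of task cases and scoring the outcomes. The sketch below shows the simplest form of such a harness; the `toy_agent` and its cases are stand-ins for illustration, and real evaluations would also inspect tool-call traces, safety behaviour, and latency.

```python
# Minimal agent-evaluation harness: run each task case through the
# agent and return the fraction of cases answered correctly.
def evaluate(agent, cases) -> float:
    passed = sum(1 for prompt, expected in cases if agent(prompt) == expected)
    return passed / len(cases)

# Toy stand-in agent (a lookup table), purely for illustration.
def toy_agent(prompt: str) -> str:
    return {"2+2?": "4", "capital of France?": "Paris"}.get(prompt, "unknown")

cases = [
    ("2+2?", "4"),
    ("capital of France?", "Paris"),
    ("3*3?", "9"),  # the toy agent will miss this one
]
print(evaluate(toy_agent, cases))  # 2 of the 3 cases pass
```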

Limiting an Agent’s Powers

Singapore’s guidelines call for organizations deploying agentic AI to limit the powers of these agents in order to mitigate risk. Policies should ensure that agents have access only to the minimal tools and data necessary for their tasks: a coding assistant, for instance, should not need web search access if it already has the latest software documentation.
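
In practice, this least-privilege principle can be enforced with an explicit tool allowlist per agent profile, so a coding assistant cannot call web search even if that tool exists elsewhere in the system. A minimal sketch, with hypothetical profile and tool names:

```python
# Least-privilege tool access: each agent profile carries an explicit
# allowlist; any tool not on the list is denied. Names are illustrative.
TOOL_ALLOWLISTS: dict[str, set[str]] = {
    "coding_assistant": {"read_docs", "run_tests", "edit_files"},
    "research_agent": {"web_search", "read_docs"},
}

def invoke_tool(agent_profile: str, tool: str) -> str:
    # Default-deny: an unknown profile gets an empty allowlist.
    allowed = TOOL_ALLOWLISTS.get(agent_profile, set())
    if tool not in allowed:
        raise PermissionError(f"{agent_profile!r} may not use {tool!r}")
    return f"{tool} invoked"

print(invoke_tool("coding_assistant", "run_tests"))
# invoke_tool("coding_assistant", "web_search") would raise PermissionError
```

The default-deny posture matters: any tool or profile not explicitly listed is refused, rather than silently permitted.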

Effective identity management and access control are also emphasized, ensuring traceability and accountability as these agents become more autonomous.
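
Traceability of this kind usually rests on an audit trail that ties every agent action to both the agent's own identity and an accountable human owner. The sketch below assumes a hypothetical in-memory log and field names chosen for illustration:

```python
import time

# Per-agent audit logging: every action is recorded with the acting
# agent's identity and the human or team accountable for it.
audit_log: list[dict] = []

def record(agent_id: str, owner: str, action: str, outcome: str) -> None:
    audit_log.append({
        "ts": time.time(),     # when the action occurred
        "agent_id": agent_id,  # the acting agent's own identity
        "owner": owner,        # the accountable human or team
        "action": action,
        "outcome": outcome,
    })

record("agent-42", "ops-team", "read_docs", "ok")
print(audit_log[-1]["agent_id"], audit_log[-1]["owner"])
```

A production system would write these records to append-only storage, but the key design point survives even in this sketch: no agent action is recorded without a named accountable owner.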

Conclusion

As organizations increasingly integrate agentic AI into their workflows, adhering to frameworks like Singapore’s Model AI Governance Framework is crucial. By ensuring human oversight and implementing necessary safeguards, the potential of AI can be harnessed responsibly, paving the way for a more efficient and transparent future.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...