The Power of Putting AI Governance into Practice
For artificial intelligence in government to have a positive impact, it must be trusted by the employees using it and the citizens whose lives may be affected by it. AI governance is essential to ensuring responsible use.
The Disconnect in AI Governance
There is a disconnect between embracing and governing AI within governments. A recent IDC Data and AI Impact Report found that while 78% of public sector organizations globally say they fully trust AI, only 40% have invested in the governance and safeguards needed to make that trust well-founded. This disconnect matters.
AI is no longer confined to pilot projects or back-office experimentation. It is increasingly embedded in everyday government workflows, automating work once performed manually by civil servants. Increasingly, AI will shape actions and decisions that have real consequences for citizens.
Trust and Automation
Automation raises the stakes for trust. Can employees trust AI outputs enough to rely on them when making consequential decisions and taking actions? Can citizens trust their governments to use their data responsibly and deploy AI fairly?
AI governance exists to help organizations answer “yes” to those questions. It is the strategic and operational framework that ensures AI is trustworthy, ethical, and compliant. AI governance spans oversight, compliance, operations, and culture to provide the guardrails needed to manage AI responsibly across its entire lifecycle.
Beyond Compliance
When people hear about governance, they often think of regulations. But regulation isn’t the starting point for governments; trust is. Policy, in many cases, is the mechanism governments are now using to deliver trust at scale.
One of the greatest misconceptions about AI governance is that it is synonymous with regulatory compliance. Kalliopi Spyridaki, SAS’s chief privacy strategist, notes that compliance is necessary, but governance must come first.
True governance begins earlier, with clear accountability, risk classification, and transparency embedded into AI systems from their inception. This approach does not stifle innovation; it enables it by creating confidence among leaders, employees, regulators, and the public that AI can be used responsibly at scale.
Intentionality in AI Governance
In the public sector, the risks of AI range from personal harms, such as unfair benefit determinations, to systemic harms, such as erosion of trust in public institutions. AI governance mitigates these risks by establishing clear standards, accountability mechanisms, transparency requirements, and multidisciplinary oversight structures that include ethics, legal, and domain expertise.
Vrushali Sawant, a SAS data scientist and member of its data ethics practice, emphasizes intentionality as the backbone of trustworthy AI. “Intentionality means designing with purpose and accountability,” particularly in public sector environments where trust and equity are paramount.
Governments must begin by asking fundamental questions: Who benefits? Who could be harmed? Is AI the right tool for this problem? This ethical inquiry must persist throughout the AI lifecycle.
Embedding Responsible AI Principles
This intentionality extends into putting the principles of ‘responsible AI’ into practice by embedding them into AI systems. Ongoing monitoring, auditing, and remediation mechanisms ensure models remain aligned with policy, values, and public expectations.
Tools like Model Cards, described as “nutrition labels for AI,” play a critical role by documenting purpose, training data, fairness assessments, and limitations. Combined with audits and usage tracking, they transform governance from a checkbox into a living, visible practice that builds trust across technical and non-technical stakeholders alike.
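To make the idea concrete, a Model Card can be represented as a simple structured record. The sketch below is a minimal, hypothetical illustration in Python; the class name, field names, and example values are assumptions for demonstration, not a standard schema or any specific product’s format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Model Card record ("nutrition label for AI").
# Field names are illustrative assumptions, not a standard schema.
@dataclass
class ModelCard:
    name: str
    purpose: str                      # what decisions the model supports
    training_data: str                # provenance of the training data
    fairness_assessment: str          # summary of bias/fairness testing
    limitations: list[str] = field(default_factory=list)

    def summary(self) -> str:
        """Render a one-line, human-readable label for the model."""
        lims = "; ".join(self.limitations) or "none documented"
        return (f"{self.name}: {self.purpose} | data: {self.training_data} | "
                f"fairness: {self.fairness_assessment} | limitations: {lims}")

# Example card for a hypothetical public-sector model.
card = ModelCard(
    name="benefit-eligibility-v2",
    purpose="Flag benefit applications for manual review",
    training_data="2019-2023 anonymized case records",
    fairness_assessment="Disparate-impact ratio checked across regions",
    limitations=["Not validated for emergency claims"],
)
print(card.summary())
```

Keeping the card alongside the model, and updating it after each audit, is what turns documentation into the “living, visible practice” described above.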
While responsible AI systems are essential, governments often underestimate risks that are ethical, social, and operational, not just technical. These include fraudulent digital services, deepfake-driven misinformation, biased automated decisions, and attacks on public sector systems.
Addressing Shadow AI Risks
Shadow AI, the unsanctioned use of AI tools or applications by employees or end users without the approval or oversight of the IT department, poses major risks, including data leaks, intellectual property loss, and compliance breaches. Because these risks do not sit neatly within cybersecurity or IT domains, they require broader governance frameworks.
A centralized view of models, agents, and use cases becomes crucial to make AI visible, governable, and ready to scale. It helps identify shadow AI and supports a foundational AI governance approach that accelerates innovation.
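One way to picture such a centralized view is as a registry of AI assets that flags entries lacking an accountable owner or governance approval. The sketch below is a minimal, hypothetical illustration; the `AIAsset` structure, its fields, and the example entries are assumptions for demonstration, not a reference to any specific governance tool.

```python
from dataclasses import dataclass

# Hypothetical sketch of a centralized AI inventory entry.
@dataclass
class AIAsset:
    name: str
    owner: str        # accountable team or official ("unknown" if none)
    use_case: str
    approved: bool    # has passed governance review

# Example inventory with one sanctioned and one unsanctioned asset.
registry: list[AIAsset] = [
    AIAsset("chatbot-citizen-faq", "Digital Services", "FAQ answering", True),
    AIAsset("doc-summarizer-plugin", "unknown", "Summarizing case files", False),
]

# Surface potential shadow AI: assets with no accountable owner or approval.
shadow = [a.name for a in registry if a.owner == "unknown" or not a.approved]
print(shadow)  # ['doc-summarizer-plugin']
```

Even a simple inventory like this makes unsanctioned tools visible, which is the precondition for bringing them under governance rather than banning them outright.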
Cultivating AI Literacy and Culture
Governance is as much about people as it is about technology. AI literacy is at the heart of building a culture of responsible innovation. Without a baseline understanding of how AI systems work and where they fall short, government leaders, employees, and policymakers cannot effectively procure, deploy, and oversee AI.
Building a culture that understands AI and AI governance enables more informed decision-making, reduces risks, and builds greater trust in AI results. Yet many governments underinvest in educating their workforce in AI and AI governance relative to AI development.
The Need for Continuous Vigilance
As governments move toward greater automation, including AI agents and autonomous systems, the need for continuous vigilance has become even more pronounced. Trust does not emerge by accident; it is built intentionally through governance disciplines that evolve alongside technology.
Governments have important work to do to ensure their data and AI systems are as trustworthy as their employees and citizens need and expect them to be. The time to invest in AI governance is now.