80% of Governments to Use AI Agents by 2028: Gartner
Gartner predicts that at least 80% of governments will deploy AI agents by 2028 to automate routine decision-making, marking a major shift in public sector operations. This transition is expected to enhance efficiency, accuracy, and service delivery on a large scale.
Increasing Pressure for AI Integration
According to analyst Daniel Nieto, governments are under growing pressure to embed AI into decision-making processes. Advances in multimodal AI, conversational systems, and agentic frameworks are expanding public institutions' capacity to automate routine tasks and anticipate needs.
Challenges to Adoption
Despite these advances, adoption challenges remain. A Gartner survey found that 41% of government organizations face siloed strategies and 31% struggle with legacy systems, both of which limit the effectiveness of their digital transformation efforts.
Transition to Decision Intelligence
A key insight from the report is the transition from traditional AI governance to Decision Intelligence (DI). Unlike earlier models that focused solely on data and algorithms, DI emphasizes how decisions are designed, executed, monitored, and audited. This approach is particularly critical for governments, where transparency and fairness directly impact public trust.
Mandating Explainable AI
Gartner predicts that by 2029, 70% of government agencies will mandate Explainable AI (XAI) and Human-in-the-Loop (HITL) mechanisms for all citizen-impacting decisions. This ensures that automated decisions can be reviewed, challenged, and corrected when necessary.
Focus on Citizen Trust and Experience
The findings indicate a fundamental transformation in public sector technology. Governments are shifting from a focus on process-driven systems to decision-driven ecosystems, where AI augments human judgment rather than replacing it. Approximately 50% of respondents identified improved citizen experience as a top priority.
As AI automates more services, citizens' direct interaction with government may decrease, making trust in these systems even more critical.
Balancing Automation with Accountability
Success in government AI will depend on balancing automation with accountability. Without strong governance, AI risks becoming opaque and eroding public trust. Ultimately, the future of government AI will not be defined by the extent of its automation but by how transparent, fair, and trustworthy its decisions are.