The Wild West of AI? Why Data Governance Matters More Than Ever
In an era where the federal government has opted to restrict state mandates and regulations governing the use of AI tools, the importance of information governance remains unchanged. The recent executive order has curtailed the ability of individual states to regulate AI tools, creating an environment ripe for faster innovation and broader deployment. However, deregulating these tools does not equate to deregulating the data.
Public sector leaders are now navigating a regulatory paradox: a strong federal push for AI adoption, paired with limited guidance on safety, accountability, and long-term risk. In this climate, responsibility does not disappear; instead, it shifts. The burden of risk increasingly falls on information owners responsible for how data is collected, governed, retained, and ultimately made available to automated systems.
The Gatekeeper’s Mandate
The regulatory gap created by the recent executive order does not eliminate accountability; it relocates it. As AI tools move faster into production environments, the quality, governance, and stewardship of the data feeding those systems become the primary line of defense against legal, ethical, and operational risk.
The 2025 AI Action Plan identifies high-quality data as a national strategic asset. This designation elevates records and data professionals from compliance stewards to central actors in responsible AI adoption. Decisions about what data is collected, how it is classified, how long it is retained, and who can access it now directly shape whether AI systems are explainable, defensible, and trustworthy.
Governance Priorities in an AI-Enabled Environment
Agencies looking to deploy AI responsibly should focus on four governance priorities:
- Enforce Data Minimization
AI systems are designed to consume large volumes of data, but effective governance requires restraint. Only data strictly necessary for a defined, mission-specific purpose should be collected or ingested. Minimization reduces attack surfaces, limits the blast radius of potential breaches, and simplifies compliance obligations.
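In practice, minimization can be enforced mechanically at the point of ingestion. The sketch below illustrates the idea with a field-level allow-list; the field names and records are hypothetical placeholders, and a real agency would derive its allow-list from a documented, mission-specific purpose.

```python
# A minimal sketch of data minimization: strip every field that is not
# on an approved allow-list before any AI system ever sees the record.
# APPROVED_FIELDS is a hypothetical example, not a real schedule.

APPROVED_FIELDS = {"case_id", "request_type", "submitted_date"}

def minimize(record: dict) -> dict:
    """Drop every field not on the approved allow-list."""
    return {k: v for k, v in record.items() if k in APPROVED_FIELDS}

raw = {
    "case_id": "C-1042",
    "request_type": "permit",
    "submitted_date": "2025-03-01",
    "ssn": "redacted",           # not needed for this purpose
    "home_address": "redacted",  # not needed for this purpose
}

clean = minimize(raw)
print(sorted(clean))  # -> ['case_id', 'request_type', 'submitted_date']
```

An allow-list (rather than a block-list) is the safer default: any newly added field is excluded until someone affirmatively justifies collecting it.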
- Implement “Need-to-Keep” Retention Policies
Data retention must be active, intentional, and defensible. Clear retention periods should be established not only for records but also for AI training data, prompts, outputs, and user interactions. When data no longer serves a verified legal, operational, or mission purpose, it should be defensibly destroyed.
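A "need-to-keep" policy ultimately reduces to a mechanical check: has this item exceeded its retention period without a verified reason to keep it? The sketch below shows that check; the categories and retention periods are illustrative assumptions, not drawn from any actual records schedule.

```python
from datetime import date, timedelta

# Hypothetical retention schedule covering records as well as AI
# artifacts (training data, prompt logs, outputs). Periods are
# illustrative placeholders.
RETENTION_DAYS = {
    "training_data": 365,
    "prompt_log": 90,
    "ai_output": 180,
}

def is_past_retention(category: str, created: date, today: date) -> bool:
    """True when an item has exceeded its retention period and, absent a
    legal hold or verified mission need, should be queued for
    defensible destruction."""
    return today - created > timedelta(days=RETENTION_DAYS[category])

# A 90-day prompt log created on Jan 1 is past retention by June 1.
print(is_past_retention("prompt_log", date(2025, 1, 1), date(2025, 6, 1)))  # -> True
```

The point of making the rule executable is that destruction becomes routine and auditable rather than an occasional cleanup effort.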
- Demand Privacy-Preserving Techniques
Before approving AI tools, agencies should rigorously evaluate the privacy architecture behind them. Techniques such as anonymization and differential privacy are no longer optional safeguards; they are baseline requirements for responsible deployment. These approaches are especially critical as agencies explore secondary uses of data beyond its original collection context.
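As one concrete illustration of what "privacy-preserving" means here, the sketch below shows the Laplace mechanism for releasing a differentially private count, using only the Python standard library. The count and the epsilon value are illustrative; real deployments would pick epsilon through a formal privacy-budget process.

```python
import random

def noisy_count(true_count: int, epsilon: float) -> float:
    """Release a count under differential privacy via the Laplace
    mechanism (query sensitivity 1). The difference of two exponential
    draws with rate epsilon is a Laplace(0, 1/epsilon) sample; smaller
    epsilon means stronger privacy and a noisier released value."""
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# The true count never leaves the system; only the noisy value is released.
print(noisy_count(128, epsilon=0.5))
```

The released value is still useful in aggregate, but no single individual's presence or absence meaningfully changes what an observer can learn.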
- Mandate Human-in-the-Loop Oversight
Algorithms are powerful but lack judgment, context, and accountability. Strong information governance extends beyond securing data to validating how AI-driven outputs are used. High-stakes decisions, particularly those affecting citizen services, should never rely solely on automated systems.
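A human-in-the-loop requirement can be expressed as a simple routing rule: automated outputs that are high-stakes or low-confidence go to a reviewer instead of taking effect. The sketch below shows that gate; the decision categories and confidence threshold are hypothetical placeholders.

```python
# A minimal sketch of a human-in-the-loop gate. HIGH_STAKES and
# CONFIDENCE_THRESHOLD are illustrative assumptions, not policy.

HIGH_STAKES = {"benefit_denial", "license_revocation"}
CONFIDENCE_THRESHOLD = 0.95

def route(decision_type: str, model_confidence: float) -> str:
    """Send high-stakes or low-confidence AI outputs to a human
    reviewer rather than acting on them automatically."""
    if decision_type in HIGH_STAKES or model_confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "auto_approve"

# Even a very confident model never auto-decides a high-stakes case.
print(route("benefit_denial", 0.99))  # -> human_review
```

Note that the high-stakes check comes first: model confidence is never allowed to override the requirement for human judgment on decisions that affect citizen services.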
The Bottom Line
The legal landscape may be shifting, but the ethical imperative remains constant. Agencies that prioritize strong information governance do more than reduce compliance risk; they create the conditions under which AI can be deployed responsibly, scaled sustainably, and trusted by the public it is meant to serve.