Strengthening Data Governance in the Age of AI

Zero-Trust Data Governance Needed to Protect AI Models from Sloppiness

Organizations need to be less trusting of data given how much of it is AI-generated, according to new research from Gartner.

As more enterprises jump on board the generative AI train — a recent Gartner survey found that 84% of organizations expect to spend more on it this year — the risk grows that future large language models (LLMs) will be trained on outputs from previous models, increasing the danger of so-called model collapse.

Recommendations for Risk Management

To avoid this, Gartner recommends that companies make changes to manage the risk of unverified data. These include:

  • Appointing an AI governance leader to work closely with data and analytics teams.
  • Improving collaboration between departments through cross-functional groups that include representatives from cybersecurity, data, and analytics.
  • Updating existing security and data management policies to address risks from AI-generated data.

The Future of Data Governance

Gartner predicts that by 2028, 50% of organizations will have adopted a zero-trust posture for data governance in response to the tidal wave of unverified AI-generated data.

“Organizations can no longer implicitly trust data or assume it was human generated,” stated Gartner managing VP Wan Fui Chan. “As AI-generated data becomes pervasive and indistinguishable from human-created data, a zero-trust posture establishing authentication and verification measures is essential to safeguard business and financial outcomes.”
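The "authentication and verification measures" Chan describes can be illustrated with a minimal sketch: a trusted data producer signs each record at the point of creation, and the ingestion pipeline rejects anything whose signature fails to verify rather than trusting it implicitly. Everything here, including the key, the record format, and the function names, is hypothetical; the sketch uses only Python's standard library.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret held by a trusted data producer.
SECRET_KEY = b"example-provenance-key"

def sign_record(record: dict) -> str:
    """Attach an HMAC-SHA256 signature when a record is created."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str) -> bool:
    """Zero-trust check: accept a record only if its signature verifies."""
    expected = sign_record(record)
    return hmac.compare_digest(expected, signature)

record = {"source": "survey-2025", "text": "spending expectations"}
sig = sign_record(record)

assert verify_record(record, sig)        # authentic record passes
tampered = {**record, "text": "altered"}
assert not verify_record(tampered, sig)  # tampered record is rejected
```

In practice organizations would use asymmetric signatures or content-provenance standards rather than a shared secret, but the principle is the same: data carries verifiable evidence of its origin, and unverified data is treated as untrusted by default.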

Geopolitical Considerations

Complicating matters further, governments are taking divergent approaches to AI. Requirements may differ significantly across geographies, with some jurisdictions seeking to enforce stricter controls on AI-generated content while others adopt a more flexible approach.

Case Study: Deloitte Australia

Perhaps the clearest example of how AI can cause data governance problems came when Deloitte Australia had to refund part of a government contract fee after AI-generated errors, including non-existent legal citations, were found in its final report.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...