Zero-Trust Data Governance Needed to Protect AI Models from Sloppiness
Organizations need to be less trusting of data given how much of it is AI-generated, according to new research from Gartner.
As more enterprises jump on board the generative AI train — a recent Gartner survey found that 84% of organizations expect to spend more on it this year — the risk grows that future large language models (LLMs) will be trained on outputs from previous models, increasing the danger of so-called model collapse.
Recommendations for Risk Management
To mitigate this risk, Gartner recommends that companies make changes to manage the flow of unverified data. These include:
- Appointing an AI governance leader to work closely with data and analytics teams.
- Improving collaboration between departments through cross-functional groups that include representatives from cybersecurity, data, and analytics.
- Updating existing security and data management policies to address risks from AI-generated data.
The Future of Data Governance
Gartner predicts that by 2028, 50% of organizations will have adopted a zero-trust posture for data governance in response to the tidal wave of unverified AI-generated data.
“Organizations can no longer implicitly trust data or assume it was human generated,” stated Gartner managing VP Wan Fui Chan. “As AI-generated data becomes pervasive and indistinguishable from human-created data, a zero-trust posture establishing authentication and verification measures is essential to safeguard business and financial outcomes.”
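Gartner leaves "authentication and verification measures" abstract, but one concrete pattern such a posture could take is cryptographic provenance tagging: data is signed at a trusted ingestion point and verified before it is admitted to, say, a training corpus. The sketch below is purely illustrative — the key handling, record format, and workflow are assumptions, not anything Gartner prescribes:

```python
import hmac
import hashlib

# Hypothetical shared key managed by the data governance team.
SECRET_KEY = b"example-shared-key"

def sign_record(payload: bytes) -> str:
    """Attach an HMAC-SHA256 tag asserting the payload passed a trusted ingestion check."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(payload: bytes, tag: str) -> bool:
    """Admit a record only if its tag matches; unprovenanced or altered data is rejected."""
    expected = sign_record(payload)
    return hmac.compare_digest(expected, tag)

# A record signed at ingestion verifies cleanly...
record = b"customer survey response, human-verified 2025-01-15"
tag = sign_record(record)
print(verify_record(record, tag))                     # True
# ...while data lacking a valid tag is rejected.
print(verify_record(b"unverified synthetic text", tag))  # False
```

In practice, organizations would more likely lean on public-key signatures or content-provenance standards rather than a shared secret, but the zero-trust principle is the same: nothing enters the pipeline without a verifiable origin.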
Geopolitical Considerations
Complicating matters further, governments are taking divergent approaches to AI regulation. Requirements may differ significantly across geographies: some jurisdictions are seeking to enforce stricter controls on AI-generated content, while others may adopt a more flexible approach.
Case Study: Deloitte Australia
Perhaps the clearest example of AI causing a data governance failure came when Deloitte Australia had to refund part of a government contract fee after AI-generated errors, including citations to non-existent legal sources, were found in its final report.