As Enterprise AI Use Deepens, New Research Highlights the Urgent Need for Data Governance
New research reveals a widening adoption gap and new risk patterns as AI usage deepens across business workflows.
Introduction
MOUNTAIN VIEW, Calif., Feb. 5, 2026 /PRNewswire/ — Enterprise use of AI is expanding rapidly across development, operations, and knowledge work. However, new research shows that these changes in behavior are creating new data risks that legacy technology can’t see or govern. The 2026 AI Adoption & Risk Report, released today by Cyberhaven Labs, offers a clear-eyed look at how enterprises are using AI and why governance and data security must keep pace with these changes.
“What this research makes clear is that enterprise AI adoption isn’t just accelerating, it’s fragmenting,” said Nishant Doshi, CEO of Cyberhaven. “A small set of teams is moving fast and embedding AI deeply into daily work, while security and governance are often playing catch-up. As organizations plan for 2026 and beyond, the risk isn’t AI itself; it’s not understanding how AI is actually being used. Without visibility into which tools are in play, what data is flowing through them, and where controls need to adapt, enterprises risk widening the gap between innovation and trust.”
Key Findings
1. An AI Adoption Gap Is Emerging
AI adoption and use are not unfolding as a steady, industry-wide wave. Instead, they are becoming increasingly polarized.
- A widening gap is emerging between AI early adopters and organizations that remain hesitant to embrace these technologies.
- The top 1% of early adopter organizations use more than 300 GenAI tools.
- In contrast, cautious enterprises typically employ fewer than 15 GenAI tools.
2. Most GenAI SaaS Tools Are Objectively Risky
Most AI use today occurs in tools that do not meet traditional enterprise risk standards, yet employees continue to enter sensitive data into them at high rates.
- Across the top 100 most-used GenAI SaaS applications, 82% are classified as “medium,” “high,” or “critical” risk.
- Cyberhaven Labs data shows that 32.3% of ChatGPT usage occurs through personal accounts, as does 24.9% of Gemini usage.
- 39.7% of all data movements into AI tools, including prompts and copy-paste actions, involve sensitive data.
Together, these behaviors significantly limit organizational visibility into AI usage and data flows.
3. Coding Assistants and AI Agents Are Becoming the “Second Wave” of Workplace AI
AI coding assistants (such as Cursor, GitHub Copilot, and Claude Code) continued to grow steadily through 2025.
- In companies leading in AI adoption, nearly 90% of developers use these tools, whereas in a typical organization, adoption is closer to 50%.
- At the other end of the spectrum, only 6% of developers use AI coding assistants, illustrating the widening AI adoption gap.
- In the later months of 2025, 30% of developers using AI coding assistants reported using at least two.
Conclusion
As enterprise AI adoption continues to accelerate, the Cyberhaven Labs 2026 AI Adoption & Risk Report underscores a widening divide between innovation and oversight. AI adoption is becoming uneven across organizations, teams, and workflows, with the highest levels often occurring in environments with the least mature governance and visibility.
“AI is no longer a side experiment for most enterprises; it’s becoming a core part of the infrastructure,” added Doshi. “Organizations that succeed will be those that move beyond one-size-fits-all policies and invest in security approaches that reflect real usage patterns. By bringing visibility, context, and control together, enterprises can enable teams to innovate with AI while maintaining trust, compliance, and resilience as adoption continues to evolve.”
Explore the full report findings in detail.