AI Risk and Readiness Report 2026
For most organizations, this is the year AI becomes infrastructure. Agents now execute actions autonomously: modifying records, creating accounts, and pushing code through API calls that complete before any human reviews them. This reality makes every AI deployment a security risk, whether or not organizations treat it as one.
The security stacks found in most organizations today were built for a different world: one where humans were the only actors, processes were deterministic, data stayed in recognizable forms, and trust was verified at the browser. That world no longer exists.
Key Findings
- AI tools are now deployed at 73% of organizations surveyed, but governance that enforces security and policy in real time has reached only 7%. This leaves a 66-point structural deficit, one that widens as AI adoption accelerates faster than controls can mature.
- 90% of organizations increased AI security budgets this year, yet 29% feel less secure than they did twelve months ago. The problem is outpacing the investment.
- 94% of respondents report gaps in AI activity visibility. 88% cannot distinguish personal AI accounts from corporate instances. Only 6% claim to see the full scope of their organization’s AI pipeline.
- DLP matches patterns while AI transforms meaning; only 8% have controls that evaluate content semantically, regardless of how it has been rewritten.
- AI agents have write access to collaboration tools (53%), email (40%), code repositories (25%), and identity providers (8%). 91% of organizations only discover what an agent did after it has executed the action.
- 31% rely on written policies and employee compliance as their primary enforcement mechanism. Another 11% have nothing at all. Only 23% enforce AI security inline, at the point of action.
AI Governance Lags Far Behind Adoption
Twelve months ago, most organizations treated AI governance as a future priority: something to formalize once adoption stabilized. Adoption didn’t wait. Copilots, code-completion tools, and content generators shipped into production across departments, and by the time security had a framework in place, the AI footprint was already operational.
Today, 68% of organizations describe their AI governance as reactive or still developing. Only 7% have reached advanced maturity with real-time policy enforcement. The 66-point gap between the 73% deploying AI tools and the 7% governing them in real time is a structural mismatch—organizations are building at production speed on a security and compliance foundation that barely exists.
As a result, 39% have already experienced an AI-related near-miss involving unintended data exposure. Of those, 17% changed nothing afterward. The governance conversation isn’t just behind schedule; in many organizations, it never started.
More Budget, Less Confidence
Paradoxically, the AI governance gap exists despite organizations investing more than ever in security. 90% increased AI security spending this year, with nearly a third raising budgets by more than 25%, yet 29% report feeling less secure than twelve months ago.
Research participants explained why: 34% cite business pressure to adopt AI faster than security can follow as the biggest barrier. Skill gaps came second at 25%, legacy tools that cannot interpret AI-specific threats third at 21%, and budget constraints fourth at 14%.
Most AI Activity Is Invisible to Security
You cannot secure what you cannot see. Only 6% of organizations report complete visibility into AI usage across their environment. 45% have partial visibility limited to managed applications, blind to anything outside of authorized tools. 35% see only network-level traffic patterns, while 14% have no visibility at all.
Even where detection exists, distinguishing what matters remains difficult. 88% cannot reliably tell personal AI accounts from corporate instances on the same platform, the #1 technical blind spot in the survey. When a security team cannot tell whether an employee is using an authorized AI tenant or a personal account with no data governance, DLP policies, access controls, and audit trails all become unreliable.
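To make that distinction concrete, the sketch below triages proxy log entries by checking for a known corporate tenant identifier on AI platform hosts. This is a minimal illustration, assuming a proxy that records the destination host and whatever tenant ID the platform exposes; the field names, host list, and allow-list are hypothetical, not a vendor API.

```python
# Hypothetical proxy-log triage: flag AI sessions that lack a known
# corporate tenant identifier. Field names ("host", "tenant_id") are
# illustrative; real logs vary by proxy vendor and AI platform.
CORPORATE_TENANTS = {"acme-corp-tenant"}                      # assumed allow-list
AI_PLATFORM_HOSTS = {"chat.openai.com", "gemini.google.com"}  # example hosts

def classify_session(entry: dict) -> str:
    """Return 'corporate', 'personal', or 'non-ai' for one proxy log entry."""
    if entry.get("host") not in AI_PLATFORM_HOSTS:
        return "non-ai"
    # A session presenting a known corporate tenant ID is treated as managed;
    # anything else bound for an AI host is assumed to be a personal account.
    return "corporate" if entry.get("tenant_id") in CORPORATE_TENANTS else "personal"

sessions = [
    {"user": "jdoe", "host": "chat.openai.com", "tenant_id": "acme-corp-tenant"},
    {"user": "jdoe", "host": "chat.openai.com", "tenant_id": None},  # personal login
]
for s in sessions:
    print(s["user"], classify_session(s))  # -> corporate, then personal
```

Even this toy version shows why the blind spot persists: without a tenant signal in the logs, the two sessions above are indistinguishable.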
AI Renders Legacy DLP Powerless
Even where organizations can see AI activity, the primary tool tasked with catching data in motion was designed for a fundamentally different kind of movement. DLP was built to find specific patterns: credit card formats, Social Security number sequences, regex matches against known sensitive content.
DLP may block an upload or a copy/paste of sensitive data into a prompt by matching these patterns. But once an AI model ingests the data, it can rephrase the sensitive content, preserving its meaning while discarding the digital fingerprint that pattern matching depends on.
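A toy example makes the failure mode concrete. The regex below stands in for a pattern-based DLP rule; the rephrased string is one plausible way a model could restate the same secret while breaking the pattern. Both strings and the rule itself are illustrative.

```python
import re

# Classic pattern-based DLP rule: match U.S. SSN formatting (ddd-dd-dddd).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

original = "Customer SSN: 123-45-6789, please update the record."
# A model asked to summarize or rewrite the text might plausibly emit:
rephrased = ("The customer's social security number is one twenty-three, "
             "forty-five, sixty-seven eighty-nine.")

print(bool(SSN_PATTERN.search(original)))   # True:  the pattern fires
print(bool(SSN_PATTERN.search(rephrased)))  # False: same secret, zero matches
```

The secret survives the rewrite; the fingerprint does not. A semantic control would have to score the two strings as equivalent in meaning rather than equivalent in bytes.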
AI Agents Run Unsupervised
While data leakage through AI tools is the risk most organizations recognize, the deeper exposure is that AI systems are now acting on their own, with many operating in shadow mode outside security’s view.
The survey quantifies how far this has spread: 56% report real exposure to agentic AI risk. 32% have zero visibility into agent actions, and 36% are blind to machine-to-machine (M2M) AI traffic entirely.
Closing the Execution Gap
The Capability Maturity Model shows where the gaps are. The following summarizes the most useful actions for each risk vector:
- Close AI Visibility Gaps: Expand activity-level monitoring across SaaS, API, and M2M traffic.
- Translate Policy Into Enforceable Guardrails: Identify high-risk AI use cases and embed enforceable policies.
- Deploy Semantic Data Protection: Evaluate controls against content transformation tests.
- Enforce Before Execution: Audit agents with write access and establish approval gates (see the sketch after this list).
- Modernize Detection and Containment: Define what anomalous behavior looks like for agent actions.
- Reduce Control Fragmentation: Unify CASB, DLP, and access policy for cohesive enforcement.
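As a sketch of the enforcement gate referenced above, the code below blocks write-class agent actions until a human approves them, rather than logging them after the fact. The action taxonomy and the input()-based approval hook are assumptions for illustration; a production gate would integrate with an identity provider and a real approval workflow.

```python
from dataclasses import dataclass

# Assumed taxonomy of write-class actions; real deployments would derive
# this from an audit of which agents hold write access to which systems.
WRITE_ACTIONS = {"modify_record", "create_account", "push_code"}

@dataclass
class AgentAction:
    agent_id: str
    name: str
    target: str

def require_approval(action: AgentAction) -> bool:
    """Stand-in for a real approval workflow; returns True only on explicit 'y'."""
    answer = input(f"Approve {action.name} on {action.target} by {action.agent_id}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: AgentAction) -> None:
    # Enforce *before* execution: write actions block until approved,
    # instead of being discovered in an audit log after they complete.
    if action.name in WRITE_ACTIONS and not require_approval(action):
        raise PermissionError(f"{action.name} denied for agent {action.agent_id}")
    print(f"Executing {action.name} on {action.target}")

execute(AgentAction("agent-17", "push_code", "repo:payments"))
```

The design choice that matters is where the check sits: inline, before the API call completes, which is precisely what the 91% of organizations that discover agent actions only after execution lack today.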
AI security is now an operational discipline. The maturity dimensions are mapped, the dependency sequence is clear, and the actions are concrete. What remains is the decision to build.