Bridging the AI Security Gap: Insights from the 2026 Risk Management Report

The Purple Book Community Releases New Research, ‘State of AI Risk Management 2026’

SAN FRANCISCO—(BUSINESS WIRE)—RSAC 2026 — The Purple Book Community (PBC), a global community of senior security leaders, in partnership with ArmorCode, has released the State of AI Risk Management 2026. This report is based on a survey of over 650 senior enterprise cybersecurity leaders across North America and Europe, revealing a growing gap between perceived AI security readiness and the operational blind spots caused by shadow AI and new vulnerabilities introduced by AI-driven development.

According to the research, 90% of enterprises claim to have visibility into their AI footprint, yet 59% have confirmed or suspect the presence of shadow AI within their environments. This suggests that employees are using unsanctioned AI tools or deploying agentic AI systems outside established monitoring and governance processes.

Critical Timing for AI Governance

This research is timely as enterprises rapidly implement AI across development and business workflows, often outpacing the ability of security and governance frameworks to adapt.

The study also found that 70% of organizations have confirmed or suspect vulnerabilities introduced by AI-generated code in their production systems, underscoring how the speed of AI-assisted development exceeds traditional security review cycles.

The gap between visibility and control is identified as one of the most pressing challenges in enterprise AI security today. Sangram Dash, a PBC Charter Member, emphasizes that the greatest AI security threat is not what organizations cannot see but rather what they can see but cannot govern quickly enough to mitigate.

Key Findings from the Report

The research identifies several key trends shaping enterprise AI security:

  • Shadow AI is Becoming the Norm: Nearly three in five security leaders (59%) confirm or suspect that employees are using AI tools that have not been approved by IT or security teams, indicating that decentralized AI adoption is outpacing governance processes.
  • AI-Generated Code is Accelerating Risk Exposure: Nearly three-quarters (73%) of organizations state that AI-assisted development is increasing software velocity beyond the pace at which security teams can review, leading to widespread AI-generated vulnerabilities in production.
  • Tool Fragmentation is Weakening Security Posture: Over half (51%) of enterprises utilize 11 or more security scanning and vulnerability management tools, resulting in siloed insights and operational complexity that complicate risk prioritization.
  • Security Teams are Drowning in Noise: Almost half (46%) of respondents report spending significant time triaging vulnerabilities that ultimately do not matter, while critical issues remain obscured across disconnected tools.

Together, these issues contribute to what the report describes as the “confidence gap”: the widening distance between perceived AI security readiness and the operational reality of governing AI at enterprise scale.

AI Adoption Surges While Governance Struggles

The research confirms that AI-assisted development has become mainstream within enterprise software teams. Nearly three-quarters (73%) of organizations report extensive AI usage within their development processes, while 78% are piloting or deploying agentic AI systems capable of taking autonomous action.

As AI systems expand to agents acting on behalf of organizations, the governance challenge will grow significantly. Without stronger oversight and unified visibility into risks across applications, cloud, infrastructure, and AI systems, enterprises risk further widening the gap between vulnerability awareness and control.

Karthik Swarnam, Chief Security and Trust Officer at ArmorCode, points out that the real challenge lies not in AI adoption itself but in the governance required to manage it responsibly at enterprise scale. While visibility into AI is improving, the volume and speed of change are outpacing operational capabilities.

Research Methodology

The State of AI Risk Management 2026 surveyed over 650 cybersecurity decision-makers, including CISOs, VPs of Security, and security directors across various industries such as software, financial services, healthcare, manufacturing, and retail. Respondents represent organizations with 1,000 to more than 20,000 employees across North America and Europe.

Conducted between December 2025 and February 2026, the commissioned research reflects respondent perceptions at a specific point in time and may not fully represent all organizational environments.

About The Purple Book Community

The Purple Book Community (PBC) is a global network of over 1,000 cybersecurity leaders and practitioners dedicated to democratizing software security and addressing its evolving challenges in an AI-powered world through peer collaboration.

In the five years since its inception, PBC has established itself as a respected forum, bringing together CISOs and leaders across application, product, infrastructure, and AI security, as well as academics and analysts worldwide.

Members meet monthly to discuss key topics ranging from secure AI adoption to regulatory compliance and building security program maturity. PBC’s Centers of Excellence convene focus groups of senior leaders to raise awareness of challenges, define best practices, and generate free resources for the cybersecurity community.

To learn more about the PBC, visit their official website.

About ArmorCode

ArmorCode’s Agentic AI Platform aids enterprises in managing security risks across diverse environments. Powered by Anya, the first agentic AI framework for enterprise security, it unifies exposure management across various domains, providing visibility, insight, and control without replacing existing tools.
