AI Oversight Crisis: Survey Reveals Gaps in Governance Amid Rapid Adoption

Shadow AI Governance Lags as AI Adoption Soars

This is a paid press release. Contact the press release distributor directly with any inquiries.

Introduction

A recent global survey conducted by Cato Networks has revealed alarming gaps in AI oversight among organizations. The study, which polled more than 600 IT leaders across North America, EMEA, and APJ, found that a staggering 69 percent of respondents lack a formal system for tracking AI adoption.

Key Findings

The survey highlights a troubling disconnect between AI usage and governance. Notably, 61 percent of participants reported finding unauthorized AI tools within their environments, yet only 26 percent of organizations have implemented solutions to monitor AI usage. A further 49 percent either do not track AI usage at all or address it only reactively.

Implications of AI Adoption

According to Etay Maor, chief security strategist at Cato Networks, “In many enterprises, AI adoption is happening from the bottom up.” Employees tend to gravitate towards AI tools that they find comfortable, believing they provide a productivity edge. Yet, without proper visibility and governance, enterprises inadvertently expand their attack surface.

AI Security Preparedness Falls Short

The research further exposes significant flaws in the way enterprises approach AI adoption. A remarkable 71 percent of respondents indicated that their primary motivation for adopting AI is to enhance productivity and efficiency. However, 69 percent admitted they have no system in place to monitor that usage.

Most enterprises remain unaware of the AI tools their employees are utilizing, the data being shared, and the potential compliance risks that may arise. Alarmingly, only 13 percent of respondents consider their organization’s management of shadow AI risks to be ‘highly effective’. Furthermore, less than 10 percent believe they have a ‘highly effective’ defense against AI-generated cyber threats such as deepfakes and prompt injection attacks.

The Dangers of Shadow AI

Shadow AI operates much like shadow IT, where unauthorized technology is adopted to solve immediate problems. However, the associated risks around data processing, model training, and the opacity of AI decision-making present substantial security concerns. IT leaders are acutely aware of the implications: 53 percent express high or extreme concern regarding AI security risks.

Conclusion

Maor emphasizes the urgency of the situation: “It is not a question of whether there is shadow AI usage within an enterprise, but whether you have the ability to detect it, govern it, and secure it before an issue arises.” The findings from this research indicate a pressing need for enterprises to take swift action to gain visibility and control over their AI usage.

As AI adoption continues to soar, the importance of robust governance frameworks cannot be overstated. Organizations must prioritize monitoring capabilities to mitigate the risks associated with shadow AI, ensuring they remain secure in an increasingly AI-driven landscape.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...