OpenText Study Warns of AI Security & Governance Gap
OpenText has published research in collaboration with the Ponemon Institute, revealing a significant gap in security and governance measures as organizations increasingly deploy generative AI technologies. According to the survey, 52% of enterprises have fully or partially implemented generative AI (GenAI).
Adoption vs. Oversight
The findings underscore a troubling disconnect between rapid AI adoption and the oversight needed to govern it as companies expand their use of AI in cybersecurity and other critical operations. The study surveyed 1,878 IT and IT security practitioners across multiple regions, including North America, Asia-Pacific, Europe, the Middle East, Africa, and Latin America, covering sectors such as financial services, healthcare, technology, energy, and manufacturing.
Governance Gap
Alarmingly, only 20% of enterprises have achieved what the study classifies as AI maturity in cybersecurity, meaning AI is fully deployed in security activities and its risks are adequately assessed. Nearly 79% of organizations have not reached this critical stage.
Policy adoption remains limited: only 41% of organizations have specific AI data privacy policies in place, and just 43% have adopted a risk-based governance approach addressing issues such as bias, security threats, and ethical concerns.
Implementation Challenges
The report indicates that the pace of AI implementation is outpacing internal controls. Almost 59% of respondents noted that AI complicates compliance with privacy and security regulations, yet most organizations lack dedicated privacy rules for AI systems.
Operational concerns are also prevalent: 58% of respondents said that managing prompt or input risks, such as misleading or harmful responses, is highly challenging, and more than 56% reported difficulty managing user risks, including the inadvertent spread of misinformation.
Trust Issues with AI
The study questioned whether AI systems are delivering the anticipated benefits in security operations. Only 51% of respondents said AI effectively reduces the time required to detect anomalies or emerging threats, and just 48% rated AI as effective for advanced tasks such as threat detection and hunting for deeper insights.
Bias and reliability remain significant obstacles. Nearly 62% of respondents stated that minimizing model and bias risks, including unfair or discriminatory outputs, is extremely difficult. Other barriers include errors in AI decision rules (cited by 45% of respondents) and inaccuracies in the data fed into AI systems (highlighted by 40%).
Future of Autonomous AI
The concept of fully autonomous AI appears distant for many businesses. Only 47% of participants said their AI models can learn robust norms and make safe, autonomous decisions, while 51% believe human oversight remains essential in AI governance, given how rapidly attackers adapt.
Building a Responsible AI Framework
Muhi Majzoub, EVP of Product & Engineering at OpenText, emphasized that the challenge transcends mere adoption. “AI maturity isn't just about adopting AI tools—it’s about doing it responsibly,” he stated. He stressed that security and governance must be integrated early into AI systems. “When they're built into AI systems from the start, organizations can operate with greater transparency, continuously monitor systems, and trust the outcomes AI delivers,” he added.
Regional Insights
The survey’s cross-regional sample was designed to capture insights from executives, decision-makers, and practitioners involved in IT security, engineering, infrastructure, risk, and compliance—areas intricately linked to AI and security strategy.
As AI tools become more embedded in daily operations and critical business processes, the survey suggests that implementation is advancing faster than the necessary controls to manage risks. Majzoub concluded, “The leaders in this next phase of AI adoption will be those who build transparency and control into AI from the start.” Organizations must prioritize secure information management, clear governance frameworks, policy-based controls, and continuous monitoring to maintain trustworthiness and compliance in AI systems.