Bridging the AI Security and Governance Gap

OpenText Study Warns of AI Security & Governance Gap

OpenText has published research in collaboration with the Ponemon Institute, revealing a significant gap in security and governance measures as organizations increasingly deploy generative AI technologies. According to the survey, 52% of enterprises have fully or partially implemented generative AI (GenAI).

Adoption vs. Oversight

The findings underscore a troubling disconnect between the rapid adoption of AI and the corresponding oversight as companies expand their use of AI in cybersecurity and other critical operations. The study surveyed 1,878 IT and IT security practitioners across multiple regions, including North America, Asia-Pacific, Europe, the Middle East, Africa, and Latin America, covering various sectors such as financial services, healthcare, technology, energy, and manufacturing.

Governance Gap

Alarmingly, only 20% of enterprises have achieved what the study classifies as AI maturity in cybersecurity, meaning AI is fully deployed in security activities with risks adequately assessed. Roughly 80% of organizations have yet to reach this critical stage.

Policy adoption remains limited: only 41% of organizations have specific AI data privacy policies in place, and just 43% have adopted a risk-based governance approach addressing issues such as bias, security threats, and ethical concerns.

Implementation Challenges

The report indicates that the pace of AI implementation is outpacing internal controls. Almost 59% of respondents noted that AI complicates compliance with privacy and security regulations, yet most organizations lack dedicated privacy rules for AI systems.

Operational concerns are also prevalent; 58% of respondents expressed that managing prompt or input risks, such as misleading or harmful responses, is highly challenging. More than 56% reported difficulties in managing user risks, including the inadvertent spread of misinformation.
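The prompt and input risks described above are often mitigated with screening layers that sit in front of a GenAI model. As a minimal sketch of that idea, the following rule-based filter flags risky prompts before they reach a model. The pattern list, function name, and thresholds here are illustrative assumptions, not controls described in the study; production systems would typically rely on a maintained policy list or a dedicated moderation service.

```python
import re

# Hypothetical deny-list patterns; a real deployment would maintain these
# centrally and combine them with model-based moderation.
BLOCKED_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"\b(password|api[_ ]?key|secret)\b", re.IGNORECASE),
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); a prompt is blocked if any pattern matches."""
    reasons = [p.pattern for p in BLOCKED_PATTERNS if p.search(prompt)]
    return (not reasons, reasons)
```

A guardrail like this addresses only the input side; the user risks the survey mentions (such as spreading misinformation) also require review of model outputs.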

Trust Issues with AI

The study questioned whether AI systems are delivering the anticipated benefits in security operations. Only 51% of respondents said AI effectively reduces the time required to detect anomalies or emerging threats, and fewer than half (48%) rated AI as effective for advanced tasks such as threat detection and threat hunting for deeper insights.

Bias and reliability remain significant obstacles. Nearly 62% of respondents stated that minimizing model and bias risks, including unfair or discriminatory outputs, is extremely difficult. Other barriers include errors in AI decision rules (cited by 45% of respondents) and inaccuracies in the data fed into AI systems (highlighted by 40%).

Future of Autonomous AI

The concept of fully autonomous AI appears distant for many businesses. Only 47% of participants indicated that their AI models can learn robust norms and make safe, autonomous decisions, while 51% believe human oversight is essential in AI governance due to the rapid adaptability of attackers.

Building a Responsible AI Framework

Muhi Majzoub, EVP of Product & Engineering at OpenText, emphasized that the challenge transcends mere adoption. “AI maturity isn't just about adopting AI tools—it’s about doing it responsibly,” he stated. He stressed that security and governance must be integrated early into AI systems. “When they're built into AI systems from the start, organizations can operate with greater transparency, continuously monitor systems, and trust the outcomes AI delivers,” he added.

Survey Scope

The survey’s cross-regional sample was designed to capture insights from executives, decision-makers, and practitioners involved in IT security, engineering, infrastructure, risk, and compliance—areas intricately linked to AI and security strategy.

As AI tools become more embedded in daily operations and critical business processes, the survey suggests that implementation is advancing faster than the necessary controls to manage risks. Majzoub concluded, “The leaders in this next phase of AI adoption will be those who build transparency and control into AI from the start.” Organizations must prioritize secure information management, clear governance frameworks, policy-based controls, and continuous monitoring to maintain trustworthiness and compliance in AI systems.
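The continuous monitoring the article calls for can be as simple as tracking the rate of flagged AI responses against a policy threshold. The sliding-window monitor below is a minimal sketch of that pattern; the class name, window size, and the 20% alert ratio are assumptions for illustration, not recommendations from the study.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class ResponseMonitor:
    """Alerts when the share of flagged AI responses in a sliding
    window exceeds a policy-defined ratio (illustrative values)."""
    window: int = 100
    alert_ratio: float = 0.2
    _events: deque = field(default_factory=deque)

    def record(self, flagged: bool) -> bool:
        """Record one response; return True if the alert threshold is crossed."""
        self._events.append(flagged)
        if len(self._events) > self.window:
            self._events.popleft()
        return sum(self._events) / len(self._events) > self.alert_ratio
```

In practice such a monitor would feed alerts into the same incident-response tooling used for other security telemetry, so that AI behavior is reviewed under existing governance processes.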

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...