Global Trust in Generative AI Rises Amid AI Governance Gaps

SAS, a global leader in data and AI, has unveiled new research that explores the use, impact, and trustworthiness of AI. The IDC Data and AI Impact Report: The Trust Imperative, commissioned by SAS, found that IT and business leaders report having greater trust in generative AI than any other form of AI.

Trust vs. Investment in AI

The global research exploring AI use and adoption also found that only 40% of organizations are investing in making AI systems trustworthy through governance, explainability, and ethical safeguards. This is particularly striking considering that organizations prioritizing trustworthy AI are 60% more likely to double the ROI of their AI projects. Paradoxically, among respondents reporting the least investment in trustworthy AI systems, generative AI (e.g., ChatGPT) was viewed as 200% more trustworthy than traditional AI (e.g., machine learning), even though the latter is the most established, reliable, and explainable form of AI.

“Our research shows a contradiction: that forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy,” said Kathy Lange, Research Director of the AI and Automation Practice at IDC. “As AI providers, professionals, and personal users, we must ask: Generative AI is trusted, but is it always trustworthy? And are leaders applying the necessary guardrails and AI governance practices to this emerging technology?”

Emerging Technologies and Trust Levels

The research draws on a global survey of 2,375 respondents conducted across North America, Latin America, Europe, the Middle East and Africa, and Asia Pacific. Participants included a balanced mix of IT professionals and line-of-business leaders, offering perspectives from both technology and business functions.

Overall, the study found that respondents placed more trust in emerging technologies, such as generative AI and agentic AI, than in more established forms of AI. Almost half of respondents (48%) reported “complete trust” in generative AI, while a third (33%) said the same of agentic AI. The least trusted form is traditional AI, with fewer than one in five (18%) indicating complete trust.

Even as they reported high trust in generative AI and agentic AI, survey respondents expressed concerns about data privacy (62%), transparency and explainability (57%), and ethical use (56%).

Quantum AI Gaining Trust

Meanwhile, quantum AI is quickly gaining confidence, even though the technology needed to execute most use cases has yet to be fully realized. Almost a third of global decision-makers say they are familiar with quantum AI, and 26% report complete trust in the technology, despite real-world applications still being in their early stages.

Challenges in Trustworthy AI Implementation

The study showed a rapid rise in AI usage, particularly generative AI, which has quickly eclipsed traditional AI in both visibility and application (81% vs. 66%). This surge has introduced a new set of risks and ethical concerns.

Across all regions, IDC researchers identified a misalignment between how much organizations trust AI and how trustworthy the technology actually is. Per the study, while nearly 8 in 10 organizations (78%) claim to fully trust AI, only 40% have invested in making their systems demonstrably trustworthy through AI governance, explainability, and ethical safeguards.

The research also showed that trustworthy AI measures rank low among priorities when operationalizing AI projects. Asked to name their top three organizational priorities, only 2% of respondents selected developing an AI governance framework, and fewer than 10% reported developing a responsible AI policy. Deprioritizing these measures may prevent organizations from fully realizing the value of their AI investments down the road.

The Importance of Data Foundations

As AI systems become more autonomous and deeply integrated into critical processes, data foundations also become more important. The quality, diversity, and governance of data directly influence AI outcomes, making smart data strategies essential to realizing benefits (e.g., ROI, productivity gains) and mitigating risks.

The study identified three major hurdles preventing success with AI implementations: weak data infrastructure, poor governance, and a lack of AI skills. Nearly half of organizations (49%) cite data foundations that are not centralized, or cloud data environments that are not optimized, as a major barrier. This top concern was followed by a lack of sufficient data governance processes (44%) and a shortage of skilled specialists (41%).

Respondents ranked difficulty in accessing relevant data sources (58%) as the number one issue in managing the data used in AI implementations, followed by data privacy and compliance issues (49%) and data quality (46%).

Conclusion

“For the good of society, businesses, and employees – trust in AI is imperative,” said Bryan Harris, Chief Technology Officer at SAS. “In order to achieve this, the AI industry must increase the success rate of implementations, humans must critically review AI results, and leadership must empower the workforce with AI.”

Interactive dashboards are available for deeper exploration of the survey results.

SAS Innovate 2026 – a one-of-a-kind experience for business leaders, technical users, and SAS partners – is coming April 27–30, 2026, in Grapevine, Texas. Visit the SAS Innovate website for more information and to save the date!
