Global Trust in Generative AI Rises Amid AI Governance Gaps

SAS, a global leader in data and AI, has unveiled new research exploring the use, impact, and trustworthiness of AI. The IDC Data and AI Impact Report: The Trust Imperative, commissioned by SAS, found that IT and business leaders report greater trust in generative AI than in any other form of AI.

Trust vs. Investment in AI

The global research on AI use and adoption also found that only 40% of organizations are investing to make their AI systems trustworthy through governance, explainability, and ethical safeguards. This is particularly striking given that organizations prioritizing trustworthy AI are 60% more likely to double the ROI of their AI projects. Paradoxically, among those reporting the least investment in trustworthy AI systems, generative AI (e.g., ChatGPT) was viewed as 200% more trustworthy than traditional AI (e.g., machine learning), despite the latter being the most established, reliable, and explainable form of AI.

“Our research shows a contradiction: that forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy,” said Kathy Lange, Research Director of the AI and Automation Practice at IDC. “As AI providers, professionals, and personal users, we must ask: Generative AI is trusted, but is it always trustworthy? And are leaders applying the necessary guardrails and AI governance practices to this emerging technology?”

Emerging Technologies and Trust Levels

The research draws on a global survey of 2,375 respondents conducted across North America, Latin America, Europe, the Middle East and Africa, and Asia Pacific. Participants included a balanced mix of IT professionals and line-of-business leaders, offering perspectives from both technology and business functions.

Overall, the study found that respondents placed the most trust in emerging technologies, such as generative AI and agentic AI, over more established forms of AI. Almost half of respondents (48%) reported “complete trust” in generative AI, while a third (33%) said the same for agentic AI. Traditional AI was the least trusted form, with fewer than one in five respondents (18%) indicating complete trust.

Even as they reported high trust in generative AI and agentic AI, survey respondents expressed concerns, including data privacy (62%), transparency and explainability (57%), and ethical use (56%).

Quantum AI Gaining Trust

Meanwhile, quantum AI is quickly gaining confidence, even as the technology to execute most use cases has yet to be fully realized. Almost a third of global decision-makers say they are familiar with quantum AI, and 26% report complete trust in the technology, despite real-world applications still being in the early stages.

Challenges in Trustworthy AI Implementation

The study showed a rapid rise in AI usage, particularly generative AI, which has quickly eclipsed traditional AI in both visibility and application (81% vs. 66%). That surge has raised new risks and ethical concerns.

Across all regions, IDC researchers identified a misalignment in how much organizations trust AI versus how trustworthy the technology truly is. Per the study, while nearly 8 in 10 (78%) organizations claim to fully trust AI, only 40% have invested to make systems demonstrably trustworthy through AI governance, explainability, and ethical safeguards.

The research also showed a low priority placed on implementing trustworthy AI measures when operationalizing AI projects. When naming their top three organizational priorities, only 2% of respondents selected developing an AI governance framework, and fewer than 10% cited developing a responsible AI policy. Deprioritizing these measures, however, may prevent organizations from fully realizing the return on their AI investments down the road.

The Importance of Data Foundations

As AI systems become more autonomous and deeply integrated into critical processes, data foundations also become more important. The quality, diversity, and governance of data directly influence AI outcomes, making smart data strategies essential to realizing benefits (e.g., ROI, productivity gains) and mitigating risks.

The study identified three major hurdles preventing success with AI implementations: weak data infrastructure, poor governance, and a lack of AI skills. Nearly half of organizations (49%) cite non-centralized data foundations or non-optimized cloud data environments as a major barrier. This top concern was followed by insufficient data governance processes (44%) and a shortage of skilled specialists within the organization (41%).

Respondents ranked difficulty in accessing relevant data sources (58%) as the number one issue in managing the data used for AI implementations. Other leading concerns included data privacy and compliance issues (49%) and data quality (46%).

Conclusion

“For the good of society, businesses, and employees – trust in AI is imperative,” said Bryan Harris, Chief Technology Officer at SAS. “In order to achieve this, the AI industry must increase the success rate of implementations, humans must critically review AI results, and leadership must empower the workforce with AI.”

Interactive dashboards are available for deeper exploration of the survey results.

SAS Innovate 2026 – a one-of-a-kind experience for business leaders, technical users, and SAS partners – is coming April 27–30, 2026, in Grapevine, Texas. Visit the SAS Innovate website for more information and to save the date!
