AI Trust: Balancing Benefits and Risks in a Complex Landscape

Global Study Finds Tension Between AI's Benefits and Risks, and a Growing Governance Gap

A global study on trust in Artificial Intelligence (AI) reveals that more than half of people worldwide are unwilling to trust AI, reflecting an underlying tension between its evident benefits and its perceived risks.

Key Findings

As we enter the intelligent age, it is noteworthy that 66% of people use AI regularly, and 83% believe the use of AI will result in a wide range of benefits. However, trust, which is central to AI acceptance, remains a critical challenge. Only 46% of people globally are willing to trust AI systems, which correlates with low levels of AI literacy. Only 39% report some form of AI training, and merely 40% say their workplace has a policy or guidance on generative AI use.

Data suggests that just under half of organizations may be using AI without adequate support and governance.

Study Overview

The study titled “Trust, Attitudes and Use of Artificial Intelligence: A Global Study 2025” is the most comprehensive investigation into the public’s trust, use, and attitudes towards AI. Conducted between November 2024 and January 2025, it surveyed over 48,000 people across 47 countries.

Although 66% of respondents say they intentionally use AI regularly, fewer than half (46%) are willing to trust it. This contrasts sharply with a previous study conducted before the release of ChatGPT in 2022, which indicated a higher level of trust.

Understanding AI

Individuals and organizations are more likely to trust AI systems when they understand how AI works. However, the study finds that only 39% report some form of AI training. In line with these low levels of AI training, almost 48% report limited knowledge about AI, indicating they do not feel they understand AI or know when and how it is used.

Professor Gillespie states, “The public’s trust of AI technologies and their safe and secure use is central to sustained acceptance and adoption.” Given the transformative effects of AI on society, work, education, and the economy, bringing the public voice into the conversation has never been more critical.

AI at Work and in Education

The age of working with AI is here, with 58% of employees intentionally using AI and nearly a third (31%) using it weekly or daily. This high use is delivering a range of benefits, with most employees reporting increased efficiency, access to information, and innovation. Almost half of those surveyed report that AI has increased revenue-generating activity.

However, only 60% of organizations provide responsible AI training, and only 34% report an organizational policy or guidance on the use of generative AI tools. Raymond Campbell, Country Leader for KPMG in Caricom, notes, “The use of AI at work is creating complex risks for organizations, and a governance gap is emerging.” Almost half of employees admit to using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools like ChatGPT.

This lack of AI governance is also seen in educational institutions, only half of which have policies, resources, and training for responsible AI use in place.

AI in Society

Seventy-three percent of people report personally experiencing or observing benefits of AI, including reduced time spent on mundane tasks, enhanced personalization, reduced costs, and improved accessibility. However, four in five are also concerned about risks, and two in five report experiencing negative impacts of AI, ranging from a loss of human interaction to misinformation and disinformation.

Seventy percent believe AI regulation is required, yet only 43% believe existing laws and regulations are adequate. There is a clear public demand for international law and regulation and for the industry to partner with governments to mitigate these risks. Eighty-seven percent of respondents want stronger laws to combat AI-generated misinformation and expect media and social media companies to implement stronger fact-checking processes.

In the Caribbean, the emergence of AI across many industries is evident. Regional governments are grappling with policy development to safeguard data privacy while exploring the vast opportunities AI technology unlocks for human capital and economic development.

Chris Brome, Head of Advisory for KPMG in Caricom, emphasizes, “AI is surely the greatest technological innovation of our generation. Given its rapid advancement, it is imperative that AI systems are established on a foundation of good governance which will help to drive trust.”

“Users want assurance regarding the AI systems they interact with. Therefore, the complete potential of AI can only be realized if the public has confidence in the systems making decisions or assisting in them.”
