Balancing AI Benefits and Risks: Bridging the Governance Gap

Global Study Reveals Tension Between AI Benefits and Risks

A recent global study on trust in Artificial Intelligence (AI) highlights a significant tension between the benefits AI offers and the risks users perceive. The findings indicate that although AI is widely adopted, a considerable portion of the population remains hesitant to trust these technologies.

Key Findings

The study finds that the intelligent age is upon us: 66% of people use AI regularly, and a striking 83% of respondents believe AI will deliver a wide range of benefits. Trust, however, remains a critical challenge, with only 46% of people globally willing to trust AI systems. This lack of trust correlates with low AI literacy: only 39% report having received any form of AI training, and only 40% say their workplace has a policy on generative AI use.

Furthermore, demand for national and international regulation of AI is evident, with only 43% of respondents believing current regulations are adequate. Alarmingly, just under half of organizations may be using AI without proper support and governance.

Understanding Trust in AI

The study, conducted with over 48,000 participants across 47 countries between November 2024 and January 2025, shows that 66% of people now intentionally use AI, yet public trust has declined since a prior study conducted in 2022, before the launch of ChatGPT. This suggests that concerns are deepening even as adoption grows.

Individuals and organizations tend to trust AI systems more when they understand how those systems operate. However, with only 39% of respondents reporting any AI training, most lack adequate knowledge of the tools they use.

AI in the Workplace and Education

The integration of AI into the workplace is substantial: 58% of employees intentionally use AI, and 31% use it daily or weekly. Many report benefits such as increased efficiency and improved access to information, and almost half of those surveyed say AI has positively affected revenue-generating activities.

Despite these advantages, only 60% of organizations provide responsible AI training, and merely 34% have an organizational policy on generative AI. The study also highlights a concerning trend: nearly half of employees admit to using AI in ways that violate company policies, such as entering sensitive data into free public AI tools like ChatGPT. With the governance of responsible AI use lagging behind adoption, this exposes organizations to significant risk.

Public Concerns and the Need for Regulation

Public sentiment reflects a deep ambivalence toward AI: while 73% of people acknowledge experiencing benefits, 80% express concern about potential risks. Frequently cited issues include the loss of human interaction, cybersecurity threats, misinformation, and negative impacts on job skills. Notably, 70% of respondents advocate stronger AI regulation, yet only 43% believe existing laws are sufficient.

The demand for comprehensive regulations is clear, with 87% of respondents wanting stricter laws to combat AI-generated misinformation and expecting media companies to enhance fact-checking processes.

Conclusion

The study illustrates the complex landscape of public trust in AI, underscoring the need for stronger education, governance, and regulation. As AI continues to evolve and integrate into more sectors, prioritizing dialogue about its safe and secure use will be essential to fostering public trust and acceptance.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...