Balancing AI Benefits and Risks: Bridging the Governance Gap

Global Study Reveals Tension Between AI Benefits and Risks

A recent global study on trust in Artificial Intelligence (AI) has highlighted a significant tension between the benefits AI offers and the risks users perceive. The findings indicate that while AI is widely adopted, a considerable portion of the population remains hesitant to trust these technologies.

Key Findings

The study reveals that the intelligent age has arrived: 66% of people now use AI regularly, and a striking 83% of respondents believe AI will deliver numerous benefits. Trust, however, remains a critical challenge, as only 46% of individuals globally are willing to trust AI systems. This lack of trust correlates with low levels of AI literacy: only 39% report having received any form of AI training, and only 40% say their workplace has a policy on generative AI use.

Furthermore, there is clear demand for national and international regulation of AI, as only 43% of respondents believe current regulations are adequate. Alarmingly, the findings suggest that just under half of organizations may be using AI without proper support and governance.

Understanding Trust in AI

The study, conducted with over 48,000 participants across 47 countries between November 2024 and January 2025, shows that although 66% of people intentionally use AI, trust has diminished. Compared with a prior study conducted in 2022, before the launch of ChatGPT, public trust in AI has declined, indicating that concerns are growing even as adoption increases.

Individuals and organizations tend to trust AI systems more when they have a better understanding of how AI operates. However, with only 39% of respondents reporting any AI training, most lack adequate knowledge of the technology.

AI in the Workplace and Education

The integration of AI into the workplace is substantial, with 58% of employees intentionally using AI and 31% using it daily or weekly. Many report benefits such as increased efficiency and better access to information, and almost half of those surveyed report that AI has positively impacted revenue-generating activities.

Despite these advantages, only 60% of organizations provide responsible AI training, and merely 34% have an organizational policy on generative AI. More concerning still, nearly half of employees admit to using AI in ways that violate company policy, such as entering sensitive data into free public AI tools like ChatGPT. Because the governance of responsible AI use lags behind adoption, this behavior poses significant risks to organizations.

Public Concerns and the Need for Regulation

Public sentiment reflects a duality regarding AI: while 73% of people say they have experienced benefits, 80% express concern about potential risks, including the loss of human interaction, cybersecurity threats, misinformation, and the erosion of job skills. Notably, 70% of respondents advocate stronger AI regulation, yet only 43% believe existing laws are sufficient.

The demand for comprehensive regulation is clear: 87% of respondents want stricter laws to combat AI-generated misinformation and expect media companies to strengthen their fact-checking processes.

Conclusion

This study illustrates the complex landscape of public trust in AI technologies, underscoring the need for better education, governance, and regulation. As AI continues to evolve and spread across sectors, prioritizing its safe and responsible use will be essential to fostering public trust and acceptance.

More Insights

AI Regulations: Comparing the EU’s AI Act with Australia’s Approach

Global companies need to navigate the differing AI regulations in the European Union and Australia, with the EU's AI Act setting stringent requirements based on risk levels, while Australia adopts a...

Quebec’s New AI Guidelines for Higher Education

Quebec has released its AI policy for universities and Cégeps, outlining guidelines for the responsible use of generative AI in higher education. The policy aims to address ethical considerations and...

AI Literacy: The Compliance Imperative for Businesses

As AI adoption accelerates, regulatory expectations are rising, particularly under the EU's AI Act, which requires organizations to ensure their staff are AI literate. This article emphasizes the importance of...

Germany’s Approach to Implementing the AI Act

Germany is moving forward with the implementation of the EU AI Act, designating the Federal Network Agency (BNetzA) as the central authority for monitoring compliance and promoting innovation. The...

Global Call for AI Safety Standards by 2026

World leaders and AI pioneers are calling on the United Nations to implement binding global safeguards for artificial intelligence by 2026. This initiative aims to address the growing concerns...

Governance in the Era of AI and Zero Trust

In 2025, AI has transitioned from mere buzz to practical application across various industries, highlighting the urgent need for a robust governance framework aligned with the zero trust economy...

AI Governance Shift: From Regulation to Technical Secretariat

The upcoming governance framework on artificial intelligence in India may introduce a "technical secretariat" to coordinate AI policies across government departments, moving away from the previous...

AI Safety as a Catalyst for Innovation in Global Majority Nations

The commentary discusses the tension between regulating AI for safety and promoting innovation, emphasizing that investments in AI safety and security can foster sustainable development in Global...

ASEAN’s AI Governance: Charting a Distinct Path

ASEAN's approach to AI governance is characterized by a consensus-driven, voluntary, and principles-based framework that allows member states to navigate their unique challenges and capacities...