AI in the Workplace: Balancing Benefits and Risks

Major Survey Finds Most People Use AI Regularly at Work – But Almost Half Admit to Doing So Inappropriately

A recent global study of more than 32,000 workers across 47 countries found that 58% of employees intentionally use artificial intelligence (AI) tools at work, with a third of those users turning to AI on a weekly or daily basis. The findings point to a significant shift toward AI adoption in the workplace and highlight both the benefits and the risks that come with it.

Productivity and Performance Benefits

Many employees report that their use of AI tools has delivered real productivity and performance gains. The most frequently cited improvements were:

  • Efficiency: 67%
  • Information Access: 61%
  • Innovation: 59%
  • Work Quality: 58%

These findings align with prior research that demonstrates AI’s potential to drive significant productivity gains for both employees and organizations. Notably, general-purpose generative AI tools, such as ChatGPT, are the most widely used, with about 70% of employees relying on free, public tools rather than those provided by their employers.

Inappropriate Use of AI

Despite the advantages, nearly half (47%) of surveyed employees admit to using AI in ways that could be considered inappropriate, and a troubling 63% have observed others doing the same. Examples of risky behaviors include:

  • Uploading sensitive information into public AI tools.
  • Relying on AI-generated answers without verification.
  • Hiding the use of AI from employers.

The handling of sensitive company information is a major concern, with 48% of employees admitting to uploading sensitive data into public generative AI tools. Furthermore, 44% acknowledge using AI in ways that violate organizational policies.

Complacency and Risks

The study also found that complacent use of AI is widespread: 66% of respondents rely on AI outputs without properly evaluating them, and 56% report having made mistakes in their work as a result, underscoring the risks of inadequate oversight.

Younger employees (aged 18-34) are particularly likely to engage in both inappropriate and complacent use of AI, raising concerns about potential repercussions for organizations and employees alike. Such mistakes have led to documented cases of financial loss, reputational damage, and privacy breaches.

‘Shadow’ AI Use

Another alarming trend is the prevalence of ‘shadow’ AI use, in which employees do not disclose their reliance on AI tools. The survey found that:

  • 61% of employees avoid revealing when they use AI.
  • 55% present AI-generated content as their own.
  • 66% use AI tools without knowing if such use is permitted.

This lack of transparency complicates risk management and hampers organizations’ ability to detect and mitigate potential issues. A noticeable gap in governance exists, with only 34% of employees reporting that their organization has a policy guiding the use of generative AI tools.

Need for Training and Governance

To address these challenges, organizations must invest in responsible AI training and develop employees’ AI literacy. The study suggests that self-reported AI literacy predicts not only whether employees adopt AI tools but also how critically they engage with them: training improves how individuals verify AI outputs and how well they understand the tools’ limitations.

Despite the importance of AI literacy, fewer than half of employees (47%) report having received any training or education related to AI use. Clear policies, guidelines, and accountability measures are essential for fostering a safe and responsible AI environment.

Creating a Supportive Culture

Beyond training, cultivating a psychologically safe work environment is crucial. Employees should feel comfortable sharing how and when they use AI tools. Such a culture not only promotes better oversight and risk management but also encourages a spirit of shared learning and experimentation.

AI has the potential to significantly enhance workplace productivity and innovation. However, achieving this requires a literate workforce, rigorous governance, and an organizational culture that supports transparent and accountable AI use. Without these components, AI risks becoming a liability rather than an asset.
