AI in the Workplace: Balancing Benefits and Risks

Major Survey Finds Most People Use AI Regularly at Work – But Almost Half Admit to Doing So Inappropriately

A recent global study of more than 32,000 workers across 47 countries found that 58% of employees intentionally use artificial intelligence (AI) tools at work, and a third of those users do so on a weekly or daily basis. The findings indicate a significant shift towards AI adoption in the workplace, highlighting both the benefits and the risks associated with its use.

Productivity and Performance Benefits

Many employees report that their use of AI tools has led to real productivity and performance benefits. Key improvements noted include:

  • Efficiency: 67%
  • Information Access: 61%
  • Innovation: 59%
  • Work Quality: 58%

These findings align with prior research that demonstrates AI’s potential to drive significant productivity gains for both employees and organizations. Notably, general-purpose generative AI tools, such as ChatGPT, are the most widely used, with about 70% of employees relying on free, public tools rather than those provided by their employers.

Inappropriate Use of AI

Despite the advantages, 47% of surveyed employees admit to using AI in ways that could be considered inappropriate, and a troubling 63% have observed colleagues doing the same. Examples of risky behaviors include:

  • Uploading sensitive information into public AI tools.
  • Relying on AI-generated answers without verification.
  • Hiding the use of AI from employers.

The handling of sensitive company information is a major concern, with 48% of employees admitting to uploading sensitive data into public generative AI tools. Furthermore, 44% acknowledge using AI in ways that violate organizational policies.

Complacency and Risks

The study also found that complacency in using AI is widespread, with 66% of respondents relying on AI outputs without proper evaluation. Consequently, 56% have made mistakes in their work due to AI, highlighting the risks associated with inadequate oversight.

Younger employees (aged 18-34) are particularly likely to engage in both inappropriate and complacent use of AI, raising concerns about potential repercussions for organizations and employees alike. Such mistakes have led to documented cases of financial loss, reputational damage, and privacy breaches.

‘Shadow’ AI Use

Another alarming trend is the prevalence of shadow AI use, where employees do not disclose their use of AI tools. The survey found that:

  • 61% of employees avoid revealing when they use AI.
  • 55% present AI-generated content as their own.
  • 66% use AI tools without knowing if such use is permitted.

This lack of transparency complicates risk management and hampers organizations’ ability to detect and mitigate potential issues. A noticeable gap in governance exists, with only 34% of employees reporting that their organization has a policy guiding the use of generative AI tools.

Need for Training and Governance

To address these challenges, organizations must invest in responsible AI training and develop employees’ AI literacy. The study suggests that self-reported AI literacy predicts not only whether employees adopt AI tools but also their ability to engage with them critically. Training improves how individuals verify AI outputs and understand their limitations.

Despite the importance of AI literacy, fewer than half of employees (47%) report having received any training or education related to AI use. Clear policies, guidelines, and accountability measures are essential for fostering a safe and responsible AI environment.

Creating a Supportive Culture

Beyond training, cultivating a psychologically safe work environment is crucial. Employees should feel comfortable sharing how and when they use AI tools. Such a culture not only promotes better oversight and risk management but also encourages a spirit of shared learning and experimentation.

AI has the potential to significantly enhance workplace productivity and innovation. However, achieving this requires a literate workforce, rigorous governance, and an organizational culture that supports transparent and accountable AI use. Without these components, AI risks becoming a liability rather than an asset.

More Insights

Building Trust in AI: Strategies for a Secure Future

The Digital Trust Summit 2025 highlighted the urgent need for organizations to embed trust, fairness, and transparency into AI systems from the outset. As AI continues to evolve, strong governance and...

Rethinking Cloud Governance for AI Innovation

As organizations embrace AI innovations, they often overlook the need for updated cloud governance models that can keep pace with rapid advancements. Effective governance should be proactive and...

AI Governance: A Guide for Board Leaders

The Confederation of Indian Industry (CII) has released a guidebook aimed at helping company boards responsibly adopt and govern Artificial Intelligence (AI) technologies. The publication emphasizes...

Harnessing AI for Secure DevSecOps in a Zero-Trust Environment

The article discusses the implications of AI-powered automation in DevSecOps, highlighting the balance between efficiency and the risks associated with reliance on AI in security practices. It...

Establishing India’s First Centre for AI, Law & Regulation

Cyril Amarchand Mangaldas, Cyril Shroff, and O.P. Jindal Global University have announced the establishment of the Cyril Shroff Centre for AI, Law & Regulation, the first dedicated centre in India...

Revolutionizing AI Governance for Local Agencies with a Free Policy Tool

Darwin has launched its AI Policy Wizard, a free and interactive tool designed to assist local governments and public agencies in creating customized AI policies. The tool simplifies the process by...

Building Trust in AI Through Effective Governance

Ulla Coester emphasizes the importance of adaptable governance in building trust in AI, highlighting that unclear threats complicate global confidence in the technology. She advocates for...

Building Trustworthy AI Through Cultural Engagement

This report emphasizes the importance of inclusive AI governance to ensure diverse voices, especially from the Global South, are involved in AI access and development decisions. It highlights the...

AI Compliance: Copyright Challenges in the EU AI Act

The EU AI Act emphasizes the importance of copyright compliance for generative AI models, particularly regarding the use of vast datasets for training. It requires general-purpose AI providers to...