AI in the Workplace: Balancing Benefits and Risks

Major Survey Finds Most People Use AI Regularly at Work – But Almost Half Admit to Doing So Inappropriately

A recent global study of more than 32,000 workers across 47 countries found that 58% of employees intentionally use artificial intelligence (AI) tools at work, and a third of those users do so on a weekly or daily basis. The findings point to a significant shift toward AI adoption in the workplace and highlight both the benefits and the risks of its use.

Productivity and Performance Benefits

Many employees report that AI tools have delivered real productivity and performance gains. The most commonly cited improvements, and the share of users reporting each, were:

  • Efficiency: 67%
  • Information Access: 61%
  • Innovation: 59%
  • Work Quality: 58%

These findings align with prior research demonstrating AI’s potential to drive significant productivity gains for both employees and organizations. Notably, general-purpose generative AI tools such as ChatGPT are the most widely used, with about 70% of employees relying on free, public tools rather than those provided by their employer.

Inappropriate Use of AI

Despite these advantages, nearly half (47%) of surveyed employees admit to using AI in ways that could be considered inappropriate, and a troubling 63% have observed others doing the same. Examples of risky behaviors include:

  • Uploading sensitive information into public AI tools.
  • Relying on AI-generated answers without verification.
  • Hiding the use of AI from employers.

The handling of sensitive company information is a major concern, with 48% of employees admitting to uploading sensitive data into public generative AI tools. Furthermore, 44% acknowledge using AI in ways that violate organizational policies.

Complacency and Risks

The study also found that complacent use of AI is widespread: 66% of respondents say they have relied on AI output without properly evaluating it, and 56% report that AI has led them to make mistakes in their work, underscoring the risks of inadequate oversight.

Younger employees (aged 18–34) are particularly likely to use AI both inappropriately and complacently, raising concerns for organizations and employees alike; such mistakes have already led to documented cases of financial loss, reputational damage, and privacy breaches.

‘Shadow’ AI Use

Another concerning trend is the prevalence of “shadow” AI use, in which employees do not disclose that they are using AI tools. The survey found that:

  • 61% of employees avoid revealing when they use AI.
  • 55% present AI-generated content as their own.
  • 66% use AI tools without knowing if such use is permitted.

This lack of transparency complicates risk management and hampers organizations’ ability to detect and mitigate potential issues. There is also a clear governance gap: only 34% of employees report that their organization has a policy guiding the use of generative AI tools.

Need for Training and Governance

To address these challenges, organizations must invest in responsible AI training and develop employees’ AI literacy. The study suggests that self-reported AI literacy predicts not only whether employees adopt AI tools but also how critically they engage with them, and that training helps individuals verify AI outputs and understand their limitations.

Despite the importance of AI literacy, fewer than half of employees (47%) report having received any training or education related to AI use. Clear policies, guidelines, and accountability measures are essential for fostering a safe and responsible AI environment.

Creating a Supportive Culture

Beyond training, cultivating a psychologically safe work environment is crucial. Employees should feel comfortable sharing how and when they use AI tools. Such a culture not only promotes better oversight and risk management but also encourages a spirit of shared learning and experimentation.

AI has the potential to significantly enhance workplace productivity and innovation. However, achieving this requires a literate workforce, rigorous governance, and an organizational culture that supports transparent and accountable AI use. Without these components, AI risks becoming a liability rather than an asset.
