AI in the Workplace: Balancing Benefits and Risks

Major Survey Finds Most People Use AI Regularly at Work – But Almost Half Admit to Doing So Inappropriately

A recent global study of more than 32,000 workers across 47 countries found that 58% of employees intentionally use artificial intelligence (AI) tools at work, with a third of these individuals doing so weekly or daily. The findings point to a significant shift toward AI adoption in the workplace, highlighting both the benefits and the risks associated with its use.

Productivity and Performance Benefits

Many employees report that their use of AI tools has delivered real productivity and performance benefits. The share of respondents noting improvements in each area:

  • Efficiency: 67%
  • Information Access: 61%
  • Innovation: 59%
  • Work Quality: 58%

These findings align with prior research that demonstrates AI’s potential to drive significant productivity gains for both employees and organizations. Notably, general-purpose generative AI tools, such as ChatGPT, are the most widely used, with about 70% of employees relying on free, public tools rather than those provided by their employers.

Inappropriate Use of AI

Despite these advantages, 47% of surveyed employees admit to using AI in ways that could be considered inappropriate, and a troubling 63% have observed others doing the same. Examples of risky behaviors include:

  • Uploading sensitive information into public AI tools.
  • Relying on AI-generated answers without verification.
  • Hiding the use of AI from employers.

The handling of sensitive company information is a major concern, with 48% of employees admitting to uploading sensitive data into public generative AI tools. Furthermore, 44% acknowledge using AI in ways that violate organizational policies.

Complacency and Risks

The study also found that complacent use of AI is widespread: 66% of respondents rely on AI outputs without properly evaluating them, and 56% have made mistakes in their work as a result, underscoring the risks of inadequate oversight.

Younger employees (aged 18-34) are particularly likely to engage in both inappropriate and complacent use of AI, raising concerns about potential repercussions for organizations and employees alike. Such mistakes have led to documented cases of financial loss, reputational damage, and privacy breaches.

‘Shadow’ AI Use

Another alarming trend is the prevalence of "shadow" AI use, in which employees do not disclose that they are using AI tools. The survey found that:

  • 61% of employees avoid revealing when they use AI.
  • 55% present AI-generated content as their own.
  • 66% use AI tools without knowing if such use is permitted.

This lack of transparency complicates risk management and hampers organizations’ ability to detect and mitigate potential issues. A noticeable gap in governance exists, with only 34% of employees reporting that their organization has a policy guiding the use of generative AI tools.

Need for Training and Governance

To address these challenges, organizations must invest in responsible AI training and develop employees’ AI literacy. The study suggests that self-reported AI literacy predicts not only whether employees adopt AI tools but also their ability to engage with them critically. Training improves how individuals verify AI outputs and understand their limitations.

Despite the importance of AI literacy, fewer than half of employees (47%) report having received any training or education related to AI use. Clear policies, guidelines, and accountability measures are essential for fostering a safe and responsible AI environment.

Creating a Supportive Culture

Beyond training, cultivating a psychologically safe work environment is crucial. Employees should feel comfortable sharing how and when they use AI tools. Such a culture not only promotes better oversight and risk management but also encourages a spirit of shared learning and experimentation.

AI has the potential to significantly enhance workplace productivity and innovation. However, achieving this requires a literate workforce, rigorous governance, and an organizational culture that supports transparent and accountable AI use. Without these components, AI risks becoming a liability rather than an asset.
