AI in the Workplace: Balancing Benefits and Risks

Major Survey Finds Most People Use AI Regularly at Work – But Almost Half Admit to Doing So Inappropriately

A recent global study of more than 32,000 workers across 47 countries found that 58% of employees intentionally use artificial intelligence (AI) tools at work, with a third of those doing so weekly or daily. The findings indicate a significant shift toward AI adoption in the workplace, highlighting both the benefits and the risks of its use.

Productivity and Performance Benefits

Many employees report that their use of AI tools has led to real productivity and performance benefits. The share of employees reporting improvements in each area:

  • Efficiency: 67%
  • Information Access: 61%
  • Innovation: 59%
  • Work Quality: 58%

These findings align with prior research that demonstrates AI’s potential to drive significant productivity gains for both employees and organizations. Notably, general-purpose generative AI tools, such as ChatGPT, are the most widely used, with about 70% of employees relying on free, public tools rather than those provided by their employers.

Inappropriate Use of AI

Despite the advantages, 47% of surveyed employees admit to using AI in ways that could be considered inappropriate, and a troubling 63% have observed others doing the same. Examples of risky behaviors include:

  • Uploading sensitive information into public AI tools.
  • Relying on AI-generated answers without verification.
  • Hiding the use of AI from employers.

The handling of sensitive company information is a major concern, with 48% of employees admitting to uploading sensitive data into public generative AI tools. Furthermore, 44% acknowledge using AI in ways that violate organizational policies.

Complacency and Risks

The study also found that complacency in using AI is widespread, with 66% of respondents relying on AI outputs without proper evaluation. Consequently, 56% have made mistakes in their work due to AI, highlighting the risks associated with inadequate oversight.

Younger employees (aged 18-34) are particularly likely to engage in both inappropriate and complacent use of AI, raising concerns about potential repercussions for organizations and employees alike. Such mistakes have led to documented cases of financial loss, reputational damage, and privacy breaches.

‘Shadow’ AI Use

Another alarming trend is the prevalence of shadow AI use, where employees do not disclose their use of AI tools. The survey found that:

  • 61% of employees avoid revealing when they use AI.
  • 55% present AI-generated content as their own.
  • 66% use AI tools without knowing if such use is permitted.

This lack of transparency complicates risk management and hampers organizations’ ability to detect and mitigate potential issues. A noticeable gap in governance exists, with only 34% of employees reporting that their organization has a policy guiding the use of generative AI tools.

Need for Training and Governance

To address these challenges, organizations must invest in responsible AI training and develop employees’ AI literacy. The study suggests that self-reported AI literacy predicts not only whether employees adopt AI tools but also their ability to engage with them critically. Training improves how individuals verify AI outputs and understand their limitations.

Despite the importance of AI literacy, fewer than 47% of employees report having received any training or education on AI use. Clear policies, guidelines, and accountability measures are essential for fostering a safe and responsible AI environment.

Creating a Supportive Culture

Beyond training, cultivating a psychologically safe work environment is crucial. Employees should feel comfortable sharing how and when they use AI tools. Such a culture not only promotes better oversight and risk management but also encourages a spirit of shared learning and experimentation.

AI has the potential to significantly enhance workplace productivity and innovation. However, achieving this requires a literate workforce, rigorous governance, and an organizational culture that supports transparent and accountable AI use. Without these components, AI risks becoming a liability rather than an asset.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...