Shadow AI: Unseen Risks in the Workplace

Workers’ Use of Shadow AI Presents Compliance and Reputational Risks

Amid the growing adoption of AI tools, a new problem is emerging: when employers don’t provide sanctioned AI tools, many workers simply use the ones they prefer. In sectors like healthcare, manufacturing, and financial services, the use of so-called shadow AI tools in the workplace surged more than 200% year over year. Among enterprise employees who use generative AI at work, nearly seven in ten access public GenAI assistants like ChatGPT. Research shows that 46% of office workers, including IT professionals who understand the risks, use AI tools their employers didn’t provide.

“Given that AI isn’t going away, companies need to approach adoption proactively rather than reactively,” said a security expert. In practice, that means ensuring employees use AI securely, with guardrails that prevent security and privacy issues, while maintaining an open dialogue about AI’s benefits and risks so the tools genuinely improve how people work.

How Shadow AI Makes Organizations Vulnerable

A parallel to shadow IT, shadow AI refers to the use of unapproved or unsanctioned artificial intelligence tools. Examples include using Microsoft Copilot with a personal account on a work device or entering company data into a public version of ChatGPT. Unsanctioned AI tools can include copilots, agents, workflow automation tools, chatbots, and generative applications.

While shadow AI tools could boost productivity or worker satisfaction, their use has considerable organizational downsides. Most significantly, shadow AI tools threaten data privacy and security. Inputting sensitive data like customer information or financial records into public AI tools can trigger violations of regulations such as GDPR or HIPAA and may lead to leaks of proprietary data.

The compliance and reputational risks of unauthorized use of third-party AI applications are particularly pronounced in highly regulated sectors like finance and healthcare. Without robust training on proper usage and data best practices, well-meaning workers can easily violate compliance rules or compromise private information.

In addition to compliance issues, any unapproved software can introduce network or system vulnerabilities, and shadow AI may allow malicious code, phishing attempts, or other security breaches to go undetected until it is too late. AI tools also raise important questions about confidentiality and data use: many shadow AI tools are free versions with permissive terms governing inputs and outputs, which may allow company data to be used to train or improve models without the company’s knowledge.

Strategies to Deter Shadow AI Use

Shadow AI is not merely an issue of a lax tech environment or gaps in IT security; it indicates that employees have unmet needs regarding existing AI tools and policies. “Whether for generative AI or other tools, shadow IT is the result of not having a defined and reasonable way to test tools or get work done,” the expert explained.

To combat shadow AI, organizations should enable employees to be active partners in developing AI governance. This means fostering open workplace dialogue about AI use: workers explain which tools help them succeed, while IT explains how to use those tools safely. Research, surveys, audits, and informal conversations can reveal why these tools appeal to workers and inform the selection of suitable company-approved alternatives.

Implementing a risk-first approach to AI adoption is essential, focusing on the data that goes into the AI and how the company handles that data. This approach is akin to vendor risk management, allowing organizations to leverage established practices adjusted for AI-focused questions.
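The data-focused side of this approach can be made concrete with lightweight guardrails. The sketch below is illustrative only; the pattern list and function names are hypothetical and not tied to any specific product. It shows one way a prompt bound for a public GenAI tool might be screened for obvious sensitive data before it leaves the organization.

```python
import re

# Illustrative sketch: screen prompts for obvious sensitive patterns before
# they reach an external GenAI service. The patterns below are hypothetical
# examples, not a complete data-loss-prevention rule set.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a prompt bound for a public AI tool."""
    findings = [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]
    return (len(findings) == 0, findings)

allowed, findings = screen_prompt("Summarize the contract for jane.doe@example.com")
if not allowed:
    print(f"Blocked: prompt contains {', '.join(findings)}; use an approved internal tool instead.")
```

In practice, such checks would more likely live in a secure gateway or managed browser policy than in application code, and they work best when paired with logging, user education, and an approved alternative to redirect people toward.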

Officially adopting and integrating AI copilots can also mitigate shadow AI use. Workers who cannot use a sanctioned copilot may turn to external tools, so prioritizing copilot integration is critical to maintaining privacy and security while realizing the benefits of these tools. Many popular public AI services offer private or organization-specific versions, letting employees use familiar tools in a secure environment.

Moreover, it’s crucial to ensure that selected tools are effectively integrated into the organization and that the company understands which data each tool can access and how that data is used. A dedicated team should run controlled pilots of generative AI tools with specific teams, complete with feedback loops and a gradual rollout.

Ultimately, it’s not just about jumping on the AI bandwagon; it’s about knowing whether the tool is worth it for both the business and the people using it. Employees are more likely to follow best practices when the approved tool is effective, user-friendly, and mandated by official policy.
