Rising Concerns Over Unauthorized AI Tools in Healthcare

Wolters Kluwer Survey Reveals Proliferation of Unsanctioned AI Tools in Healthcare

A recent survey of healthcare professionals and administrators has uncovered a concerning trend: the widespread use of unauthorized AI tools, termed “shadow AI,” across hospitals and health systems in the U.S. This trend raises significant concerns regarding patient safety, data privacy, and regulatory compliance.

Key Findings

The survey indicates that 40% of respondents have encountered unauthorized AI tools within their organizations, while nearly 20% admitted to using such tools. The primary drivers behind this trend include:

  • Need for Speed: Half of the respondents cited the necessity for faster workflows as a key reason for utilizing shadow AI.
  • Curiosity and Experimentation: For many healthcare providers, curiosity about new technology outranked practical functionality as a motivation for trying these tools.
  • Direct Patient Care: One in ten users reported employing unauthorized AI tools directly in patient care scenarios, raising red flags about safety protocols.

Policy Development Gaps

The survey revealed stark disparities in policy development and awareness between healthcare providers and administrators:

  • Centralized Policy Ownership: Administrators are three times more likely than providers to be involved in healthcare AI policy development (30% vs. 9%).
  • Awareness Discrepancies: 29% of providers report awareness of major AI policies, compared with only 17% of administrators.

Optimism About AI’s Impact

Despite the challenges posed by shadow AI, a majority of healthcare professionals maintain a positive outlook regarding AI’s potential benefits:

  • Frequent Users: Over half of healthcare professionals regularly use AI tools in their work.
  • Positive Outlook: Nearly 90% of those surveyed believe that AI will significantly improve healthcare within the next five years.
  • Data Analysis: The most common use of AI among providers (60%) and administrators (78%) is for data analysis, indicating its deep integration into daily workflows.

Concerns About Patient Safety

Underpinning the excitement surrounding AI is a palpable concern for patient safety:

  • Top Concern: Both providers (25%) and administrators (26%) rank patient safety as their foremost concern regarding AI in healthcare.
  • Privacy and Data Breaches: Administrators list privacy as their second concern, while providers are more worried about inaccuracies in AI outputs.

Health Data Security Risks

Nearly 23% of healthcare professionals reported concerns about privacy and security risks associated with AI, highlighting fears of data breaches and unauthorized access.

Expert Perspective

Scott Simeone, SVP and Chief Information Officer at Tufts Medicine, emphasizes the need for robust organizational governance in the adoption of AI in healthcare:

“GenAI presents high potential for value creation in healthcare, but scaling it relies more on the maturity of organizational governance than on technology. While progress has been made, further efforts are necessary to evolve tools for control and monitoring to ensure they are effective in clinical settings.”

As the use of AI grows within health systems, the imperative for enterprise-grade controls, transparency, and AI literacy becomes increasingly clear, so that clinicians and patients alike understand the role AI plays in decision-making.

In conclusion, while the integration of AI into healthcare presents numerous opportunities for improvement, the emergence of shadow AI underscores the need for immediate action in governance and compliance to safeguard patient outcomes.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...