Wolters Kluwer Survey Reveals Proliferation of Unsanctioned AI Tools in Healthcare
A recent survey of healthcare professionals and administrators has uncovered a concerning trend: the widespread use of unauthorized AI tools, termed “shadow AI,” across hospitals and health systems in the U.S. This trend raises significant concerns regarding patient safety, data privacy, and regulatory compliance.
Key Findings
The survey indicates that 40% of respondents have encountered unauthorized AI tools within their organizations, while nearly 20% admitted to using such tools. The primary drivers behind this trend include:
- Need for Speed: Half of the respondents cited the necessity for faster workflows as a key reason for utilizing shadow AI.
- Curiosity and Experimentation: For many healthcare providers, curiosity about new technology outranked specific functionality as a motivation for trying these tools.
- Direct Patient Care: One in ten users reported employing unauthorized AI tools directly in patient care scenarios, raising red flags about safety protocols.
Policy Development Gaps
The survey revealed stark disparities in policy development and awareness between healthcare providers and administrators:
- Centralized Policy Ownership: Administrators are three times more likely than providers to be involved in healthcare AI policy development (30% vs. 9%).
- Awareness Discrepancies: 29% of providers report awareness of major AI policies, compared with only 17% of administrators.
Optimism About AI’s Impact
Despite the challenges posed by shadow AI, a majority of healthcare professionals maintain a positive outlook regarding AI’s potential benefits:
- Frequent Users: Over half of healthcare professionals regularly use AI tools in their work.
- Positive Outlook: Nearly 90% of those surveyed believe that AI will significantly improve healthcare within the next five years.
- Data Analysis: The most common use of AI among providers (60%) and administrators (78%) is for data analysis, indicating its deep integration into daily workflows.
Concerns About Patient Safety
Tempering the excitement surrounding AI is a palpable concern for patient safety:
- Top Concern: Both providers (25%) and administrators (26%) rank patient safety as their foremost concern regarding AI in healthcare.
- Privacy and Data Breaches: Administrators list privacy as their second concern, while providers are more worried about inaccuracies in AI outputs.
Health Data Security Risks
About 23% of healthcare professionals reported concerns about the privacy and security risks associated with AI, citing fears of data breaches and unauthorized access.
Expert Perspective
Scott Simeone, SVP and Chief Information Officer at Tufts Medicine, emphasizes the need for robust organizational governance in the adoption of AI in healthcare:
“GenAI presents high potential for value creation in healthcare, but scaling it relies more on the maturity of organizational governance than on technology. While progress has been made, further efforts are necessary to evolve tools for control and monitoring to ensure they are effective in clinical settings.”
As the use of AI grows within health systems, the need for enterprise-grade controls, transparency, and AI literacy becomes increasingly clear, so that clinicians and patients alike understand the role AI plays in decision-making.
In conclusion, while the integration of AI into healthcare presents numerous opportunities for improvement, the rise of shadow AI underscores the need for immediate action on governance and compliance to safeguard patient outcomes.