AI Tools in Healthcare: Unaddressed Risks to Patient Safety

VHA’s AI Use Poses Potential Patient Safety Risk

The Veterans Health Administration’s (VHA’s) implementation of generative artificial intelligence (AI) chat tools for clinical care has raised significant concerns regarding patient safety, as highlighted in a recent analysis by the Department of Veterans Affairs (VA) Office of Inspector General (OIG).

Overview of Findings

According to the January 15 review, VHA lacks a formal mechanism to identify, track, or resolve risks associated with generative AI. This absence of structured oversight calls into question how effectively patient safety is being protected as these technologies are deployed in clinical settings.
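
For illustration only, here is a minimal sketch of the kind of structured risk-tracking mechanism the OIG says is missing. The field names, lifecycle states, and Python representation are assumptions made for this example, not anything VHA or the OIG has specified.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class RiskStatus(Enum):
    """Lifecycle states for a tracked risk (assumed, for illustration)."""
    IDENTIFIED = "identified"
    UNDER_REVIEW = "under_review"
    MITIGATED = "mitigated"
    ACCEPTED = "accepted"


@dataclass
class AIRiskEntry:
    """One record in a hypothetical generative AI risk register."""
    risk_id: str
    tool_name: str                 # which AI chat tool the risk concerns
    description: str               # what could go wrong, and for whom
    clinical_impact: str           # severity if the risk materializes
    status: RiskStatus = RiskStatus.IDENTIFIED
    reported_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )
    mitigation: str | None = None  # resolution notes, filled in over time


# A register is then just a collection that can be queried for open risks.
register: list[AIRiskEntry] = [
    AIRiskEntry("R-001", "chat_tool_a", "Output may omit contraindications",
                clinical_impact="high"),
]
open_risks = [r for r in register if r.status is not RiskStatus.MITIGATED]
```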

Collaboration and Oversight

The OIG noted that VHA’s AI initiatives stem from an informal collaboration between key officials, including the acting director of VA’s National AI Institute and the chief AI officer within the Office of Information and Technology. Notably, these officials did not coordinate with the National Center for Patient Safety when authorizing AI chat tools for clinical use.

Authorized AI Tools

Currently, VHA has authorized two AI chat tools for handling patient health information: Microsoft 365 Copilot Chat and VA GPT, an internal chat tool designed for VA employees. While these tools aim to assist in clinical decision-making, the OIG warns that generative AI systems can yield inaccurate or incomplete outputs. Such inaccuracies may have serious implications for diagnoses or treatment decisions.
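
One generic safeguard against this failure mode is a human-in-the-loop gate that blocks AI-generated text from entering a record until a clinician signs off. The sketch below is a hypothetical illustration of that pattern; it does not reflect how Microsoft 365 Copilot Chat or VA GPT actually work.

```python
from dataclasses import dataclass


@dataclass
class AISuggestion:
    """An AI-generated draft that must be reviewed before clinical use."""
    text: str
    reviewed: bool = False   # a clinician has read the draft
    approved: bool = False   # the clinician has signed off on it


def accept_into_record(suggestion: AISuggestion) -> str:
    # Refuse unreviewed or unapproved output outright, so an inaccurate
    # or incomplete draft cannot silently reach a treatment decision.
    if not (suggestion.reviewed and suggestion.approved):
        raise PermissionError("Clinician review and approval required.")
    return suggestion.text


draft = AISuggestion(text="Draft care-plan summary ...")
# accept_into_record(draft) raises until a clinician marks the draft
# as reviewed and approved.
```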

Concerns About Patient Safety

The OIG expressed concern about VHA’s ability to promote and safeguard patient safety in the absence of a standardized process for managing AI-related risks. Without such a process, VHA also cannot establish a feedback loop, which is essential for identifying patterns that could improve the safety and quality of AI chat tools used in clinical settings.
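
As a rough illustration of such a feedback loop, the sketch below aggregates hypothetical clinician incident reports by tool and category to surface recurring problems. The report fields, category labels, and threshold are invented for the example.

```python
from collections import Counter


def recurring_patterns(incidents: list[dict], threshold: int = 3) -> list[tuple]:
    """Flag (tool, category) pairs that recur often enough to suggest
    a systemic problem rather than a one-off error."""
    counts = Counter((i["tool"], i["category"]) for i in incidents)
    return [pair for pair, n in counts.items() if n >= threshold]


# Hypothetical incident reports filed by clinicians reviewing AI output.
reports = [
    {"tool": "chat_tool_a", "category": "incomplete_output"},
    {"tool": "chat_tool_a", "category": "incomplete_output"},
    {"tool": "chat_tool_a", "category": "incomplete_output"},
    {"tool": "chat_tool_b", "category": "inaccurate_citation"},
]

print(recurring_patterns(reports))  # [('chat_tool_a', 'incomplete_output')]
```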

Ongoing Review and Recommendations

Given the critical nature of these findings, the OIG is sharing this preliminary assessment now so that VHA leaders are aware of the potential risks to patient safety. The review is ongoing, and the OIG has yet to issue formal recommendations. It will continue engaging with VHA leaders and will include a full analysis of this finding, along with any additional insights, in its final report.

Sector-Wide Implications

This analysis corresponds with findings from a recent Kiteworks report, which cautions that government agencies are operating in 2026 without the operational governance needed to manage AI safely. The report indicates that only 10% of government agencies have centralized AI governance, while one-third lack dedicated AI controls. Alarmingly, 76% have no automated mechanism to shut down or revoke high-risk AI systems, and a similar share lacks AI anomaly detection.
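
To make those last two statistics concrete, here is a minimal sketch of the kind of automated revocation control the report says most agencies lack. The registry design, risk tiers, anomaly signal, and threshold are all assumptions made for illustration.

```python
class AISystemRegistry:
    """Hypothetical central registry with an automated kill switch."""

    def __init__(self) -> None:
        self._systems: dict[str, dict] = {}

    def register(self, system_id: str, risk_tier: str) -> None:
        self._systems[system_id] = {"risk_tier": risk_tier, "enabled": True}

    def report_anomaly_rate(self, system_id: str, rate: float) -> None:
        # Crude automated control: revoke a high-risk system as soon as
        # its observed anomaly rate crosses an (assumed) threshold.
        entry = self._systems.get(system_id)
        if entry and entry["risk_tier"] == "high" and rate > 0.05:
            self.revoke(system_id)

    def revoke(self, system_id: str) -> None:
        self._systems[system_id]["enabled"] = False


registry = AISystemRegistry()
registry.register("clinical-chat", risk_tier="high")
registry.report_anomaly_rate("clinical-chat", rate=0.12)  # auto-revoked
```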

As the use of AI in healthcare continues to expand, it is imperative for institutions like the VHA to establish robust frameworks that ensure patient safety and effective management of AI technologies.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...