AI Accountability in Healthcare: Who Is Responsible?

When AI Decides Your Care: The Governance Questions Every Stakeholder Should Be Asking

An investigation uncovered an AI tool used by a major insurer to deny more than 300,000 claims in just two months. The denials were generated in minutes—faster than any human reviewer could read a single file. Most patients never appealed, assuming the algorithm knew something their doctor did not. The result is a governance crisis that remains largely unaddressed.

The Case of the Denied Claim

A 62-year-old man with a documented complex cardiac condition was denied coverage for specialized cardiac rehabilitation. His cardiologist deemed it clinically essential, yet the insurer’s automated system flagged the treatment codes as not meeting medical necessity criteria. He received no explanation of whether a physician had reviewed his file or whether the decision was generated algorithmically in seconds. Assuming the system was better informed than his doctor, he decided not to appeal. He was among the more than 80% of patients who never do—even though fewer than 0.2% of such denials are ever contested, and those that are contested are almost always overturned.

Accountability and Rights

The core question is deceptively simple: when a patient disagrees with a healthcare decision influenced by AI, who is accountable, and what rights does the patient actually have? As documented by the Agency for Healthcare Research and Quality in 2024, these unresolved issues are central to AI in healthcare. What should patients do when they disagree with an algorithm? Who assumes liability for decisions based on AI recommendations? These questions are not just theoretical; they represent operational realities in healthcare.

Current Governance Landscape

ECRI ranked insufficient AI governance as the number two patient safety threat for 2025. Yet only 16% of hospital executives surveyed in 2023 reported having a system-wide governance policy for AI use and data access. This accountability vacuum is a daily operational reality: AI systems influence clinical and coverage decisions while stakeholders are still negotiating who is responsible.

CMS Clarification

In February 2024, the Centers for Medicare & Medicaid Services (CMS) clarified that an algorithm cannot override a patient’s individual medical circumstances. AI may assist in coverage determinations, but it cannot substitute for individualized review of the patient’s condition and the treating physician’s recommendation. While this clarification is significant, it lacks the operational framework needed to enforce compliance effectively.

Stakeholder Responsibilities

Five stakeholders are central to every AI-influenced care decision: insurer, provider, regulator, patient, and the technology itself. None has accepted full accountability, prompting the need for all stakeholders to begin asking: What is my role when the algorithm gets it wrong?

Insurers’ Responsibilities

Insurers must consider whether their AI model makes the final determination or provides input to a human reviewer who exercises independent clinical judgment before any denial is communicated. Starting in 2026, CMS will require payers to provide specific reasons for every AI-assisted denial and publish aggregate approval data. This is not merely a reporting burden; it is an accountability framework that organizations must adopt.

Providers’ Responsibilities

Providers need to ask if their institution has documented protocols for recording, escalating, and resolving disagreements when AI-generated clinical decision support contradicts their judgment. The American College of Physicians’ 2024 policy position states that AI should augment physician decision-making, not replace it. The governance question for providers is whether workflows genuinely reflect this principle.

Patients’ Rights

Patients must ask whether they have the right to know when an AI system influenced a decision about their care, and whether there is a clear path to appeal that decision. The answer varies by state, meaning a patient’s rights depend on where they live.

Regulators’ Questions

Regulators should ask whether “meaningful human review” is defined in a way that prevents organizations from merely routing decisions through a human who rubber-stamps AI outputs.

Immediate Steps Forward

No stakeholder has all the answers yet, but that is no excuse for inaction. Every organization deploying AI in clinical or coverage decisions should mandate an audit trail, reviewable by humans, for every AI-influenced outcome. This is not merely a compliance exercise; it is the foundational record for the moment a patient challenges a decision. Every denial should carry a plain-language explanation of whether an AI model was involved and what the patient’s appeal rights are.

Moreover, governance committees overseeing healthcare AI should include providers, patients and their advocates, insurers, and healthcare regulators, with the authority to halt deployment when transparency obligations are unmet.

The Path Forward

Healthcare is at a pivotal moment. AI technologies have the potential to compress diagnostic timelines, catch conditions earlier, reduce the administrative burden on clinicians, and extend quality care to underserved populations. However, realizing that potential depends on effective governance. Governance should not be treated as an obstacle to AI’s promise in healthcare; done well, it is what makes that promise deliverable.
