When AI Decides Your Care: The Governance Questions Every Stakeholder Should Be Asking
An investigation uncovered an AI tool used by a major insurer that denied more than 300,000 claims in just two months. The denials were generated in minutes, faster than any human reviewer could read a single file. Most patients never appealed, assuming the algorithm knew something their doctor did not. The result is a governance gap that remains largely unaddressed.
The Case of the Denied Claim
A 62-year-old man with a documented complex cardiac condition was denied coverage for specialized cardiac rehabilitation. His cardiologist deemed it clinically essential, yet the insurer’s automated system flagged the treatment codes as not meeting medical necessity criteria. He received no explanation of whether a physician had reviewed his file or whether the decision had been generated algorithmically in seconds. Assuming the system was better informed than his doctor, he chose not to appeal. In that, he was typical: fewer than 0.2% of such denials are ever appealed, even though more than 80% of the appeals that are filed succeed, and his own denial would almost certainly have been overturned.
Accountability and Rights
The core question is deceptively simple: when a patient disagrees with a healthcare decision influenced by AI, who is accountable, and what rights does the patient actually have? The Agency for Healthcare Research and Quality identified these unresolved issues as central to AI in healthcare in 2024. What should patients do when they disagree with an algorithm? Who assumes liability for decisions based on AI recommendations? These questions are not theoretical; they are operational realities in today’s healthcare system.
Current Governance Landscape
ECRI ranked insufficient AI governance as the number-two patient safety threat for 2025. Notably, as of 2023 only 16% of hospital executives reported having a system-wide governance policy covering AI use and data access. This accountability vacuum is a daily operational reality: AI systems influence clinical and coverage decisions while the stakeholders involved are still negotiating who is responsible.
CMS Clarification
In February 2024, the Centers for Medicare & Medicaid Services (CMS) clarified that an algorithm cannot override a patient’s individual medical circumstances: AI may assist in coverage determinations, but it cannot replace the individualized review that a treating physician’s recommendation demands. The ruling is significant, but it lacks the operational framework needed to enforce compliance.
Stakeholder Responsibilities
Five stakeholders sit at the center of every AI-influenced care decision: the insurer, the provider, the regulator, the patient, and the technology itself. None has accepted full accountability, so each must begin asking: what is my role when the algorithm gets it wrong?
Insurers’ Responsibilities
Insurers must consider whether their AI model makes the final determination or provides input to a human reviewer who exercises independent clinical judgment before any denial is communicated. Starting in 2026, CMS will require payers to provide specific reasons for every AI-assisted denial and publish aggregate approval data. This is not merely a reporting burden; it is an accountability framework that organizations must adopt.
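To make the 2026-style requirement concrete, here is a minimal sketch of what enforcing it in a payer's pipeline might look like. This is purely illustrative: the `Determination` record, its fields, and the validation logic are assumptions for the example, not drawn from any CMS specification.

```python
from dataclasses import dataclass


@dataclass
class Determination:
    """Hypothetical record of one AI-assisted coverage determination."""
    claim_id: str
    approved: bool
    reason: str  # plain-language reason; required whenever the claim is denied


def validate(d: Determination) -> None:
    # Under a specific-reasons rule, a denial with no stated reason is invalid
    # and should never be communicated to the patient.
    if not d.approved and not d.reason.strip():
        raise ValueError(f"denial {d.claim_id} lacks a specific reason")


def aggregate_approval_rate(records: list[Determination]) -> float:
    # The kind of aggregate figure a payer could be required to publish.
    for r in records:
        validate(r)
    return sum(r.approved for r in records) / len(records)
```

The design choice worth noting is that validation runs before aggregation: a reporting pipeline built this way cannot publish statistics that include reason-less denials, which turns the disclosure rule into a hard gate rather than an after-the-fact audit.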
Providers’ Responsibilities
Providers need to ask if their institution has documented protocols for recording, escalating, and resolving disagreements when AI-generated clinical decision support contradicts their judgment. The American College of Physicians’ 2024 policy position states that AI should augment physician decision-making, not replace it. The governance question for providers is whether workflows genuinely reflect this principle.
Patients’ Rights
Patients must ask whether they have the right to know when an AI system influenced a decision about their care, and whether there is a clear path to appeal it. The answer currently varies by state, meaning a patient’s rights depend on where they live.
Regulators’ Questions
Regulators should ask whether “meaningful human review” is defined in a way that prevents organizations from merely routing decisions through a human who rubber-stamps AI outputs.
Immediate Steps Forward
No stakeholder has all the answers yet, but that is no excuse for inaction. Every organization deploying AI in clinical or coverage decisions should mandate a human-generated audit trail for every AI-influenced outcome. This is not merely a compliance exercise; it is the foundational evidence a patient needs when challenging a decision. And every denial should carry a plain-language explanation of whether an AI model was involved and what the patient’s appeal rights are.
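One way such an audit trail and patient notice could be structured is sketched below. Every name here (`AuditEntry`, `AuditTrail`, `patient_notice`, the 60-day window) is a hypothetical illustration of the idea, not a regulatory schema or any insurer's actual system.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AuditEntry:
    """Hypothetical audit record for one AI-influenced decision."""
    claim_id: str
    decision: str              # e.g. "approved" or "denied"
    ai_involved: bool
    model_version: Optional[str]  # which model, if any, produced the recommendation
    reviewer: str              # human who exercised independent clinical judgment
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def patient_notice(entry: AuditEntry, appeal_deadline_days: int = 60) -> str:
    """Plain-language explanation of AI involvement and appeal rights."""
    if entry.ai_involved:
        ai_text = ("An automated system contributed to this decision, and a "
                   "human reviewer made the final determination.")
    else:
        ai_text = "This decision was made entirely by a human reviewer."
    return (f"Claim {entry.claim_id} was {entry.decision}. {ai_text} "
            f"You have the right to appeal within {appeal_deadline_days} days.")


class AuditTrail:
    """Append-only log; entries can be exported when a decision is challenged."""

    def __init__(self) -> None:
        self._entries: list[AuditEntry] = []

    def record(self, entry: AuditEntry) -> None:
        self._entries.append(entry)

    def export(self) -> str:
        # Serialized form a patient, regulator, or court could inspect.
        return json.dumps([asdict(e) for e in self._entries], indent=2)
```

The point of the sketch is the coupling: the same record that drives the audit log also generates the patient's notice, so AI involvement cannot be logged internally yet omitted from what the patient is told.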
Moreover, governance committees overseeing healthcare AI should include providers, patients and their advocates, insurers, and healthcare regulators, with the authority to halt deployment when transparency obligations are unmet.
The Path Forward
Healthcare is at a pivotal moment. AI technologies have the potential to compress diagnostic timelines, catch conditions earlier, reduce the administrative burden on clinicians, and extend quality care to underserved populations. Realizing that potential depends on effective governance. Getting governance right is not an obstacle to AI’s promise in healthcare; it is the path to it.