Managing AI Governance in Healthcare: What Boards Need to Know
Artificial intelligence (AI) has swiftly transitioned from pilot programs into core clinical and administrative workflows across US hospitals, reshaping how care is delivered, documented, and managed. The pace of adoption is outstripping the pace of oversight, prompting federal regulators, state legislatures, and enforcement agencies to respond simultaneously. For health system boards, keeping pace with these developments requires more than just awareness; governance and oversight must evolve alongside the technology itself. The critical question is whether your board is actively governing AI or merely managing the fallout after the fact.
The Regulatory Landscape
Health system boards must be aware of significant and accelerating activity across federal agencies, state legislatures, and enforcement bodies, all moving on parallel tracks and collectively raising the bar for AI governance in healthcare.
At the federal level, the White House has directed agencies to accelerate AI adoption while building governance guardrails. The US Department of Health and Human Services (HHS) is aligning its operating divisions around common standards while creating channels for providers to participate in shaping implementation. The Centers for Medicare & Medicaid Services (CMS) has already incorporated AI into prior authorization, program integrity, payment models, and analytics. Additionally, the Office for Civil Rights (OCR) has made clear that automated systems do not diminish privacy or civil rights obligations, and that discriminatory effects in federally funded programs must be prevented.
States are advancing both comprehensive and sector-specific AI statutes, including laws that restrict or condition automated decision-making in utilization management. These measures can exceed federal requirements or conflict with them outright, requiring careful preemption analysis and coordinated implementation.
Fiduciary Duties in the Age of AI
AI does not create new fiduciary duties for health system board members; however, it raises the stakes for how existing duties are fulfilled.
Duty of Care: Stay Informed, Ask Hard Questions
Board members are expected to act with the care and diligence of a reasonably prudent person when making significant decisions. In the AI context, this means boards cannot treat AI as purely a management or technology concern. When an organization deploys AI tools affecting patient care, billing, or operations, board members must understand the material risks and ask informed questions about how those risks are managed. Board members need not become AI experts, but they must insist on clear, meaningful reporting and follow up when answers are incomplete.
The Caremark doctrine establishes that directors can face personal liability for a sustained failure to implement and monitor adequate oversight systems. Courts have increasingly applied this standard to technology-related failures, and AI oversight is a natural next step.
Duty of Loyalty: Watch for Conflicts
AI procurement is becoming a significant organizational investment, with a complex and evolving vendor landscape. Directors with financial ties to AI vendors face conflicts of interest that must be disclosed and managed. As AI becomes central to operations, boards should ensure that conflict-of-interest policies are up to date and consistently applied.
Duty of Obedience: Keep AI Aligned with Your Mission
For nonprofit health systems, the duty of obedience requires ensuring the organization remains true to its charitable mission. AI deployments that introduce disparate impacts on vulnerable populations or compromise patient privacy could jeopardize mission commitments, tax-exempt status, and compliance with federal civil rights laws.
A Practical Note on Liability
The business judgment rule generally protects directors who make informed, good-faith decisions believed to be in the organization’s best interest. The risk lies not in imperfect decisions but in uninformed ones. Robust minutes, regular AI-risk reporting, and documented committee oversight form the building blocks of a credible defense.
Questions Every Health System Board Should Be Asking Today
Accountability: Who Is Responsible for AI Decisions Across the Organization?
AI adoption is outpacing the development of governance frameworks in healthcare, creating an evolving oversight challenge. Boards should ensure management has assigned clear ownership of the full AI lifecycle, including authorization, performance monitoring, and decommissioning authority. Without that clarity, AI can quietly enter workflows, risking patient safety, privacy, and compliance.
A recent survey found that 40 percent of healthcare respondents had encountered unsanctioned AI tools in the workplace, and nearly 20 percent reported using them personally, often to speed up workflows or because approved alternatives were lacking.
Transparency: What AI Tools Are in Use and Where?
AI functions are embedded across electronic health records, billing systems, and vendor applications, often invisibly. Boards should require management to maintain an up-to-date inventory of AI tools in use, identify high-risk systems, and track vendor AI integrations.
Recent litigation highlights the stakes. In Estate of Lokken v. UnitedHealth Group, Inc., the court dismissed several tort claims but allowed breach-of-contract theories challenging algorithmic denials of post-acute-care benefits to proceed. The case underscores the need for transparency into vendor AI logic, data use, model updates, and human-in-the-loop safeguards.
Control: Are There Guardrails in Place Before New AI Tools Go Live?
Bringing a new AI tool into a clinical or operational environment carries legal, safety, and compliance implications that require structured pre-deployment review. That review is no longer optional: recent OCR rules impose affirmative duties to identify patient care decision support tools and to mitigate discrimination risks.
Oversight: Are We Getting the Right Information at the Right Time?
Board oversight of AI requires more than periodic updates; it calls for a documented monitoring system that surfaces meaningful risks before they escalate. Boards should demand regular, structured reporting on AI tool performance, incidents, and corrective actions.
Conclusion
The next era of AI in healthcare will reward organizations that pair innovation with disciplined governance. Health system boards that insist on clear reporting, tested controls, and continuous improvement will position their institutions to capture AI’s benefits while protecting patient safety, privacy, civil rights compliance, and corporate integrity. The time to act is now.