Decoding the Regulation of Health AI Tools


A new report from the Bipartisan Policy Center examines the complex and often fragmented regulatory landscape for health AI tools that operate outside the jurisdiction of the U.S. Food and Drug Administration (FDA). As artificial intelligence becomes increasingly embedded in healthcare, from automating administrative tasks to guiding clinical decisions and powering consumer wellness applications, these tools must navigate a patchwork of federal rules, state laws, and voluntary industry standards.

Understanding the Current Landscape

The report outlines the types of health AI that are not regulated as medical devices, details the key federal and state bodies providing oversight, and describes the challenges and opportunities this creates for responsible innovation. While AI tools designed to diagnose, prevent, or treat disease are regulated by the FDA as medical devices, a significant and growing category of health AI operates outside the FDA's device framework.

Common categories of health AI not typically regulated by the FDA include:

  • Administrative AI: Tools that support non-clinical functions such as automating prior authorization, detecting billing fraud, forecasting staffing needs, or managing appointment scheduling.
  • Clinical Support and Care Management Tools: AI integrated into electronic health records (EHRs) that analyze patient data to suggest follow-up actions, designed to inform, not replace, a clinician’s judgment.
  • Consumer Wellness and Digital Health Tools: Patient-facing apps and devices focused on general wellness, such as fitness trackers, meditation apps, and sleep trackers.

The Role of the 21st Century Cures Act

The 21st Century Cures Act of 2016 was pivotal in defining the FDA’s authority over health software. It clarified that certain clinical decision support (CDS) tools are exempt from being classified as medical devices if they meet specific criteria:

  1. They do not acquire or analyze medical images or device signals (such as X-rays or ECG waveforms).
  2. They use existing medical information from the patient record.
  3. They support, but do not replace, the final clinical decision.
  4. Their recommendations can be independently reviewed and understood by the provider.

If a tool fails even one of these criteria, it may be classified as Software as a Medical Device (SaMD) and fall under FDA oversight. This creates a significant “gray area” that can be challenging for developers to navigate.
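
To make this screening logic concrete, the minimal Python sketch below walks a tool through the four criteria. The CDSTool fields and the cures_act_exemption_likely function are invented for illustration; whether a real tool qualifies turns on FDA guidance and the tool's specific intended use, not a four-flag check.

    from dataclasses import dataclass

    @dataclass
    class CDSTool:
        """Hypothetical profile of a clinical decision support tool."""
        analyzes_images_or_signals: bool       # e.g., X-rays, ECG waveforms
        uses_patient_record_data: bool         # draws on existing medical information
        supports_not_replaces_clinician: bool  # clinician makes the final call
        basis_independently_reviewable: bool   # provider can inspect the reasoning

    def cures_act_exemption_likely(tool: CDSTool) -> bool:
        """Rough screen against the four Cures Act CDS criteria.

        Failing any single criterion means the tool may instead be
        Software as a Medical Device (SaMD). Illustrative only.
        """
        return (
            not tool.analyzes_images_or_signals
            and tool.uses_patient_record_data
            and tool.supports_not_replaces_clinician
            and tool.basis_independently_reviewable
        )

    # Example: an EHR-integrated tool that flags patients for follow-up
    flagger = CDSTool(
        analyzes_images_or_signals=False,
        uses_patient_record_data=True,
        supports_not_replaces_clinician=True,
        basis_independently_reviewable=True,
    )
    print(cures_act_exemption_likely(flagger))  # True: exemption plausible

Flip any one flag, such as analyzes_images_or_signals, and the screen fails, mirroring how a single disqualifying feature can pull a tool into FDA territory.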

Distributed Oversight Across Multiple Agencies

For AI tools that are not considered medical devices, oversight is distributed across multiple federal and state agencies, which can create both flexibility and potential gaps. Key agencies involved include:

  • Office of the National Coordinator for Health IT (ONC): If an AI tool is integrated into a certified EHR, ONC’s rules require developers to disclose the tool’s intended use, logic, and data inputs, although this only applies to tools supplied by the EHR developer, not third-party or internally developed apps.
  • Office for Civil Rights (OCR): Any tool that handles Protected Health Information (PHI) falls under OCR’s enforcement of HIPAA and rules against algorithmic discrimination.
  • Federal Trade Commission (FTC): The FTC can take action against companies for deceptive marketing claims about their AI tools and enforces the Health Breach Notification Rule for non-HIPAA-covered apps.
  • Centers for Medicare & Medicaid Services (CMS): CMS influences the adoption of AI tools through reimbursement policies and Conditions of Participation for providers.
  • State-Level Oversight: States are increasingly active in regulating AI, leading to various approaches, from comprehensive AI risk laws in states like Colorado to targeted consumer protection laws in Illinois and Utah.
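
As a rough illustration of how these regimes can stack on a single tool, the Python sketch below maps a few tool attributes to the bodies described above. The function name, attribute flags, and branching are all hypothetical simplifications; actual applicability turns on statutes and facts, not boolean checks.

    def applicable_oversight(handles_phi: bool,
                             in_certified_ehr: bool,
                             supplied_by_ehr_developer: bool,
                             consumer_facing: bool,
                             billed_to_medicare: bool) -> list[str]:
        """Hypothetical screen for which oversight regimes might touch a
        non-device health AI tool. Illustrative only, not compliance advice."""
        bodies = []
        if handles_phi:
            bodies.append("OCR: HIPAA and nondiscrimination rules")
        if in_certified_ehr and supplied_by_ehr_developer:
            bodies.append("ONC: transparency rules for certified EHR modules")
        if consumer_facing:
            bodies.append("FTC: deceptive claims; breach rule for non-HIPAA apps")
        if billed_to_medicare:
            bodies.append("CMS: reimbursement, Conditions of Participation")
        bodies.append("State law: e.g., Colorado, Illinois, Utah")
        return bodies

    # Example: a consumer sleep-tracking app holding no HIPAA-covered data
    print(applicable_oversight(handles_phi=False, in_certified_ehr=False,
                               supplied_by_ehr_developer=False,
                               consumer_facing=True, billed_to_medicare=False))
    # ['FTC: ...', 'State law: ...']

Even in this toy model, a single tool can trigger several regimes at once, which is precisely the fragmentation the report highlights.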

Conclusion: Moving Towards a Defined Framework

The report concludes that the current fragmented landscape creates uncertainty for developers, complicates adoption for providers, and leaves gaps in patient protection. As the industry advances, policymakers and industry leaders must collaborate on developing clear frameworks and shared standards to support responsible innovation, ensure patient trust, and improve the quality of care.

The evolution of healthcare AI is underway, transforming care delivery while raising new regulatory questions. A clear view of today’s regulatory landscape is essential for guiding the policy conversations ahead.
