Decoding the Regulation of Health AI Tools

A new report examines the complex and often fragmented regulatory landscape for health AI tools that operate outside the jurisdiction of the U.S. Food and Drug Administration (FDA). As artificial intelligence becomes increasingly embedded in healthcare, automating administrative tasks, guiding clinical decisions, and powering consumer wellness applications, these tools must navigate a patchwork of federal rules, state laws, and voluntary industry standards.

Understanding the Current Landscape

The report outlines the types of health AI that are not regulated as medical devices, details the key federal and state bodies that provide oversight, and describes the challenges and opportunities this landscape creates for responsible innovation. While AI tools designed to diagnose, prevent, or treat disease are regulated by the FDA as medical devices, a significant and growing category of health AI operates outside the FDA's device framework.

Common categories of health AI not typically regulated by the FDA include:

  • Administrative AI: Tools that support non-clinical functions such as automating prior authorization, detecting billing fraud, forecasting staffing needs, or managing appointment scheduling.
  • Clinical Support and Care Management Tools: AI integrated into electronic health records (EHRs) that analyze patient data to suggest follow-up actions, designed to inform, not replace, a clinician’s judgment.
  • Consumer Wellness and Digital Health Tools: Patient-facing apps and devices focused on general wellness, such as fitness trackers, meditation apps, and sleep trackers.

The Role of the 21st Century Cures Act

The 21st Century Cures Act of 2016 was pivotal in defining the FDA’s authority over health software. It clarified that certain clinical decision support (CDS) tools are exempt from being classified as medical devices if they meet specific criteria:

  1. They do not analyze images or signals (like X-rays or heart rates).
  2. They use existing medical information from the patient record.
  3. They support, but do not replace, the final clinical decision.
  4. Their recommendations can be independently reviewed and understood by the provider.

If a tool fails even one of these criteria, it may be classified as Software as a Medical Device (SaMD) and fall under FDA oversight. This creates a significant “gray area” that can be challenging for developers to navigate.
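To make the exemption logic concrete, here is a minimal sketch in Python. The `CDSTool` fields and the `classify_cds_tool` function are illustrative inventions based on the four criteria above, not an official FDA decision procedure; real determinations depend on FDA guidance and a tool's full intended use.

```python
from dataclasses import dataclass


@dataclass
class CDSTool:
    """Hypothetical profile of a clinical decision support tool."""
    analyzes_images_or_signals: bool         # e.g., X-rays, ECG waveforms
    uses_existing_medical_information: bool  # data already in the patient record
    supports_not_replaces_decision: bool     # clinician makes the final call
    basis_independently_reviewable: bool     # provider can inspect the reasoning


def classify_cds_tool(tool: CDSTool) -> str:
    """Rough classification under the four Cures Act CDS criteria.

    Failing even one criterion means the tool may be regulated as
    Software as a Medical Device (SaMD). This sketch ignores the many
    nuances of actual FDA guidance.
    """
    meets_all = (
        not tool.analyzes_images_or_signals
        and tool.uses_existing_medical_information
        and tool.supports_not_replaces_decision
        and tool.basis_independently_reviewable
    )
    return "likely exempt non-device CDS" if meets_all else "possible SaMD (FDA oversight)"


# Example: a sepsis-risk flag that reads labs from the EHR and shows its reasoning
sepsis_flag = CDSTool(
    analyzes_images_or_signals=False,
    uses_existing_medical_information=True,
    supports_not_replaces_decision=True,
    basis_independently_reviewable=True,
)
print(classify_cds_tool(sepsis_flag))  # -> likely exempt non-device CDS
```

The all-or-nothing structure of the check mirrors the gray area the report describes: change any one input, for instance a tool that begins analyzing ECG waveforms, and the classification flips toward SaMD.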

Distributed Oversight Across Multiple Agencies

For AI tools that are not considered medical devices, oversight is distributed across multiple federal and state agencies, which can create both flexibility and potential gaps. Key agencies involved include the following (see the sketch after the list for how these regimes can stack):

  • Office of the National Coordinator for Health IT (ONC): If an AI tool is integrated into a certified EHR, ONC’s rules require developers to disclose the tool’s intended use, logic, and data inputs, although this only applies to tools supplied by the EHR developer, not third-party or internally developed apps.
  • Office for Civil Rights (OCR): Any tool that handles Protected Health Information (PHI) falls under OCR’s enforcement of HIPAA and rules against algorithmic discrimination.
  • Federal Trade Commission (FTC): The FTC can take action against companies for deceptive marketing claims about their AI tools and enforces the Health Breach Notification Rule for non-HIPAA-covered apps.
  • Centers for Medicare & Medicaid Services (CMS): CMS influences the adoption of AI tools through reimbursement policies and Conditions of Participation for providers.
  • State-Level Oversight: States are increasingly active in regulating AI, leading to various approaches, from comprehensive AI risk laws in states like Colorado to targeted consumer protection laws in Illinois and Utah.
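As a rough illustration of how these regimes can stack on a single product, the following Python sketch maps a tool's attributes to the oversight bodies named above. The `HealthAITool` fields, the rule set, and the three-state shortlist are simplified assumptions for illustration, not legal analysis.

```python
from dataclasses import dataclass


@dataclass
class HealthAITool:
    """Hypothetical attributes that drive which oversight regimes apply."""
    handles_phi: bool                  # touches Protected Health Information
    in_certified_ehr: bool             # supplied within a certified EHR product
    consumer_facing: bool              # marketed directly to consumers
    deployed_states: tuple[str, ...]   # where the tool operates


def applicable_oversight(tool: HealthAITool) -> list[str]:
    """Collect the oversight regimes suggested by the report's mapping.

    Illustrative only: real jurisdictional analysis is fact-specific,
    the state list is not exhaustive, and CMS's influence through
    reimbursement policy is omitted because it shapes adoption rather
    than direct compliance.
    """
    regimes = ["FTC (deceptive marketing claims)"]  # baseline for any marketed tool
    if tool.handles_phi:
        regimes.append("OCR (HIPAA, rules against algorithmic discrimination)")
    elif tool.consumer_facing:
        regimes.append("FTC Health Breach Notification Rule (non-HIPAA apps)")
    if tool.in_certified_ehr:
        regimes.append("ONC (transparency requirements for certified EHRs)")
    regimes += [f"State AI law: {s}" for s in tool.deployed_states if s in {"CO", "IL", "UT"}]
    return regimes


# Example: a PHI-handling scheduling assistant embedded in a certified EHR,
# deployed in Colorado and Texas
tool = HealthAITool(handles_phi=True, in_certified_ehr=True,
                    consumer_facing=False, deployed_states=("CO", "TX"))
for regime in applicable_oversight(tool):
    print(regime)
```

Running the example surfaces three federal regimes plus one state law for a single tool, which is the fragmentation the report highlights: each body governs one slice of the product, and no single regulator sees the whole.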

Conclusion: Moving Towards a Defined Framework

The report concludes that the current fragmented landscape creates uncertainty for developers, complicates adoption for providers, and leaves gaps in patient protection. As the industry advances, policymakers and industry leaders must collaborate on developing clear frameworks and shared standards to support responsible innovation, ensure patient trust, and improve the quality of care.

The evolution of healthcare AI is underway, transforming care delivery while raising new regulatory questions. A clear view of today’s regulatory landscape is essential for guiding the policy conversations ahead.
