New Report Untangles the Complex Regulation of Health AI Tools
A new report examines the complex and often fragmented regulatory landscape for health AI tools that operate outside the jurisdiction of the U.S. Food and Drug Administration (FDA). As artificial intelligence becomes increasingly embedded in healthcare, automating administrative tasks, guiding clinical decisions, and powering consumer wellness applications, these tools must navigate a patchwork of federal rules, state laws, and voluntary industry standards.
Understanding the Current Landscape
The report outlines the types of health AI that are not regulated as medical devices, identifies the key federal and state bodies that provide oversight, and examines the challenges and opportunities this structure creates for responsible innovation. While AI tools designed to diagnose, prevent, or treat disease are regulated by the FDA as medical devices, a significant and growing category of health AI operates outside that formal device framework.
Common categories of health AI not typically regulated by the FDA include:
- Administrative AI: Tools that support non-clinical functions such as automating prior authorization, detecting billing fraud, forecasting staffing needs, or managing appointment scheduling.
- Clinical Support and Care Management Tools: AI integrated into electronic health records (EHRs) that analyze patient data to suggest follow-up actions, designed to inform, not replace, a clinician’s judgment.
- Consumer Wellness and Digital Health Tools: Patient-facing apps and devices focused on general wellness, such as fitness trackers, meditation apps, and sleep trackers.
The Role of the 21st Century Cures Act
The 21st Century Cures Act of 2016 was pivotal in defining the FDA’s authority over health software. It clarified that certain clinical decision support (CDS) tools are excluded from the definition of a medical device if they meet all of the following criteria:
- They do not acquire or analyze medical images or signals (such as X-rays or ECG waveforms).
- They use existing medical information from the patient record.
- They support, but do not replace, the final clinical decision.
- Their recommendations can be independently reviewed and understood by the provider.
If a tool fails even one of these criteria, it may be classified as Software as a Medical Device (SaMD) and fall under FDA oversight. This creates a significant “gray area” that can be challenging for developers to navigate.
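To make that decision logic concrete, here is a minimal illustrative sketch that encodes the four criteria as a simple checklist. It is not legal guidance and not an actual FDA or industry tool; the `CdsTool` fields and function name are hypothetical labels for the criteria listed above.

```python
from dataclasses import dataclass

@dataclass
class CdsTool:
    """Hypothetical description of a clinical decision support tool."""
    analyzes_images_or_signals: bool   # e.g., reads X-rays or ECG waveforms
    uses_patient_record_data: bool     # draws on existing medical information
    clinician_makes_final_call: bool   # supports, not replaces, the decision
    basis_is_reviewable: bool          # provider can independently review the logic

def may_qualify_for_cds_exemption(tool: CdsTool) -> bool:
    """Rough checklist of the 21st Century Cures Act CDS criteria.

    All four criteria must hold; failing any one means the tool may be
    regulated as Software as a Medical Device (SaMD). This illustrates
    the logic only; it is not a regulatory determination.
    """
    return (
        not tool.analyzes_images_or_signals
        and tool.uses_patient_record_data
        and tool.clinician_makes_final_call
        and tool.basis_is_reviewable
    )

# Example: an EHR-integrated tool that flags patients for follow-up
follow_up_flagger = CdsTool(
    analyzes_images_or_signals=False,
    uses_patient_record_data=True,
    clinician_makes_final_call=True,
    basis_is_reviewable=True,
)
print(may_qualify_for_cds_exemption(follow_up_flagger))  # True
```

Because the criteria are conjunctive, a single `False` on any field flips the outcome, which is exactly why the boundary between exempt CDS and SaMD is so sensitive to small design choices.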
Distributed Oversight Across Multiple Agencies
For AI tools that are not considered medical devices, oversight is distributed across multiple federal and state agencies, which can create both flexibility and potential gaps. Key agencies involved include:
- Office of the National Coordinator for Health Information Technology (ONC): If an AI tool is integrated into a certified EHR, ONC’s rules require developers to disclose the tool’s intended use, logic, and data inputs, although this applies only to tools supplied by the EHR developer, not to third-party or internally developed apps.
- Office for Civil Rights (OCR): Any tool that handles Protected Health Information (PHI) falls under OCR’s enforcement of the HIPAA privacy and security rules, as well as civil rights provisions (such as Section 1557 of the Affordable Care Act) that address algorithmic discrimination.
- Federal Trade Commission (FTC): The FTC can take action against companies for deceptive marketing claims about their AI tools and enforces the Health Breach Notification Rule for non-HIPAA-covered apps.
- Centers for Medicare & Medicaid Services (CMS): CMS influences the adoption of AI tools through reimbursement policies and Conditions of Participation for providers.
- State-Level Oversight: States are increasingly active in regulating AI, leading to various approaches, from comprehensive AI risk laws in states like Colorado to targeted consumer protection laws in Illinois and Utah.
Conclusion: Moving Towards a Defined Framework
The report concludes that the current fragmented landscape creates uncertainty for developers, complicates adoption for providers, and leaves gaps in patient protection. As the industry advances, policymakers and industry leaders must collaborate on developing clear frameworks and shared standards to support responsible innovation, ensure patient trust, and improve the quality of care.
The evolution of healthcare AI is underway, transforming care delivery while raising new regulatory questions. A clear view of today’s regulatory landscape is essential for guiding the policy conversations ahead.