FDA Updates Policy for AI-Enabled Health Technologies

FDA Eases Oversight for AI-Enabled Clinical Decision Support Software and Wearables

The U.S. Food and Drug Administration (FDA) has announced significant policy shifts aimed at easing market entry for certain digital health technologies, with a particular focus on AI-enabled and generative AI-enabled clinical decision support (CDS) software and consumer wearables. The announcement, made at the Consumer Electronics Show, marks a pivotal moment for the integration of advanced technologies in healthcare.

Policy Changes Overview

The 2026 updates to FDA guidances expand enforcement discretion for specific CDS functions and broaden the general wellness policy for non-invasive wearables that report physiological metrics. This expansion occurs while the agency maintains risk-based oversight of software that substitutes for clinical judgment or influences time-critical care.

Clinical Decision Support Software (CDS)

The revised 2026 CDS guidance introduces expanded enforcement discretion for software that provides a single, clinically appropriate recommendation, provided the software meets the Non-Device CDS criteria, including enabling healthcare providers to independently review the basis for the recommendation.

This change applies to AI technologies, including certain generative AI features, provided that clinicians can understand and verify the underlying logic and data inputs.

General Wellness Guidance

The 2026 wellness guidance clarifies that a broader range of non-invasive consumer wearables that report physiological metrics—such as blood pressure, oxygen saturation, or glucose-related signals—may fall under enforcement discretion if intended solely for general wellness. This is a significant shift from 2019 policies and accommodates more AI-derived metrics and insights when limited to wellness use.

Implications for Manufacturers

CDS Developers

The revised policy alleviates a core friction point: the need to engineer products to avoid “single recommendation” outputs so as to sidestep device classification. If a clinically appropriate single recommendation is made and the tool meets Non-Device CDS criteria—including transparent, clinician-reviewable logic—the FDA intends to exercise enforcement discretion. This can lower transaction costs, accelerate time-to-market, and unlock investment in AI features.

However, it is crucial to maintain explainability and clinician reviewability. If models are opaque, particularly large language models that operate as black boxes, or if they provide directive outputs, device oversight will still be expected.

Wearables

The broadened wellness posture allows for more non-invasive devices that report physiological measures—often computed using AI—to remain outside device regulation when framed strictly as wellness. Product teams can integrate additional sensors and AI-derived insights, as long as marketing and labeling stay within general wellness parameters.

Any diagnostic or treatment claims will trigger device status, which comes with quality and premarket obligations. The FDA has also indicated priority enforcement against higher-risk use cases, particularly where AI outputs guide clinical care without sufficient human oversight.

Application to WHOOP and Similar Products

The shift in FDA guidance is exemplified by last year’s WHOOP situation concerning an uncleared blood pressure feature. Under the new guidance, non-invasive features that present physiological readings can remain within the wellness policy if:

  • They are intended solely for general wellness.
  • They avoid diagnostic or treatment claims.
  • They include non-diagnostic user notifications (e.g., suggesting professional evaluation when values fall outside wellness ranges).
  • They remain low risk and non-invasive.

With careful claims and user messaging, AI-derived insights that previously risked device classification may now align with enforcement discretion. Any clinical or diagnostic positioning will still necessitate a formal device pathway.

Next Steps for Manufacturers

For CDS Manufacturers

Manufacturers should reassess their portfolios against the 2026 criteria to determine whether any single-recommendation AI features can qualify as Non-Device CDS with adequate transparency and human oversight. If not, they should plan for device pathways and early FDA interactions.

For Wearable Companies

Wearable companies should recalibrate their marketing and labeling to leverage the expanded wellness policy for AI-generated insights while strictly avoiding any diagnostic or treatment implications.

The Road Ahead

FDA’s Strategic Approach

The FDA’s approach signals a strategic rebalancing: greater tolerance for innovation at the low-risk, wellness, and clinician-aid end of the spectrum, especially for AI tools, paired with continued scrutiny where software substitutes for clinical judgment or affects time-sensitive care.

What to Expect

In the near future, we can anticipate increased reliance on enforcement discretion for Non-Device CDS, expanded wellness safe harbors for non-invasive wearables reporting physiological metrics, and sustained risk-based oversight for higher-impact AI uses—particularly black-box or generative AI models in clinical workflows. The emphasis will remain on human oversight, transparency, and post-market performance.

Key Takeaways

  • Wellness Expansion: Non-invasive wearables with AI-derived physiological metrics can remain outside device regulation if strictly framed as wellness.
  • Single Recommendation Freedom: CDS tools no longer need to avoid single recommendations if clinically appropriate and transparent.
  • Transparency is Key: Explainability and clinician reviewability are critical gating factors for enforcement discretion.
  • Claims Matter: Disciplined marketing and labeling are essential to avoid triggering device classification.
