FDA’s New AI Guidance: Balancing Innovation and Oversight

What the FDA’s New Guidance Signals: Regulatory Restraint in the AI Era

Public authorities are under growing pressure to respond to rapid technological change without stifling innovation. In health care, this tension is especially visible in artificial intelligence, where products can range from low-stakes lifestyle tools to high-risk software that directly influences diagnosis or treatment.

Against this backdrop, U.S. Food and Drug Administration (FDA) Commissioner Marty Makary used the 2026 Consumer Electronics Show (CES) to deliver a deregulatory message: “The government doesn’t need to be regulating everything,” he said, adding that regulators should “get out of the way” where oversight is not warranted.

The CES Announcement: Two Guidance Documents, One Theme

On January 6, 2026, the FDA released two guidance documents focused on clarifying when digital health tools fall outside the agency’s device oversight. The stated aim, echoed in Makary’s public remarks, is to reduce unnecessary regulatory burden while keeping a clear pathway for “medical-grade” products that make clinical claims or pose higher risks.

Although the announcement emphasized “AI,” the practical impact is less about endorsing any specific model and more about drawing regulatory boundaries that affect many AI-enabled products, especially wearables and decision-support tools.

Guidance 1: General Wellness Products and Wearables

The first document, “General Wellness: Policy for Low Risk Devices” (January 2026), updates the FDA’s approach to low-risk “general wellness products,” including certain wearable devices and lifestyle-focused software. The guidance explains that products intended to maintain or encourage a healthy lifestyle, and that are unrelated to diagnosing, curing, mitigating, preventing, or treating disease, may fall outside FDA device regulation under the Federal Food, Drug, and Cosmetic Act as amended by the 21st Century Cures Act.

In practice, the publicly described policy direction is that non-medical-grade wearables providing general health information can reach the market without the full weight of premarket review, while products marketed as clinically accurate, “medical grade,” or intended for disease-related decisions remain more likely to be treated as regulated devices.

Guidance 2: Clinical Decision Support Software

The second document, “Clinical Decision Support Software” (January 2026), addresses software that supports health care professionals in making clinical decisions. It focuses on clarifying which clinical decision support (CDS) functions are excluded from the statutory definition of a “device” and provides examples distinguishing non-device CDS, device CDS, and functions that may be subject to enforcement discretion.

This area matters because modern CDS can be powered by AI and can shape how clinicians interpret symptoms, labs, images, risk scores, or treatment options. The FDA’s revised framing seeks to reduce uncertainty for developers and users by clarifying when oversight applies and when it does not.

Why This Matters for Oncology and Cancer Care Pathways

Oncology is an especially relevant setting for these changes because the care journey often extends beyond the clinic. Wearables and patient-facing software can support symptom monitoring, activity and sleep tracking, detection of physiologic changes during treatment, and survivorship wellness efforts. Clearer regulatory boundaries may lower barriers for iterative, consumer-facing tools that aim to support healthier behavior without claiming to diagnose cancer, manage chemotherapy dosing, or substitute for clinician judgment.

At the same time, oncology is also a domain where “decision support” can be consequential, including tools that help clinicians assess adverse event risk, triage symptoms, or interpret complex clinical data. For higher-stakes CDS, particularly when outputs could reasonably be used to guide treatment, regulatory clarity is valuable not only to developers but also to hospitals that must evaluate safety, accountability, and clinical governance before adoption.

The Policy Trade-Off: Speed and Clarity vs. Safety and Trust

A lighter-touch approach can improve predictability for innovators and reduce time-to-market for low-risk tools. It can also help investors and health systems distinguish between wellness products and medical devices, a distinction that has often been blurred by marketing language and consumer expectations.

However, reducing oversight does not remove core challenges associated with AI in health contexts. Even when tools are positioned as “informational,” real-world use can drift toward clinical reliance, especially if interfaces present outputs with medical-sounding confidence. This makes transparency about intended use, limitations, and appropriate clinical escalation essential. It also underscores why the FDA continues to emphasize a separate lane for products that are genuinely medical-grade or that present meaningful safety risks.
