Health Care Without the Hospital: ChatGPT Health and Claude Go Direct to Consumers
Following CES 2026, OpenAI and Anthropic announced consumer-facing generative AI products for health care.
Overview of the New Platforms
ChatGPT Health was launched by OpenAI on January 7, 2026, while Claude for Healthcare by Anthropic followed on January 11, 2026. Both products allow users to connect their medical records and wellness data directly to AI chatbots, marking a significant transition from theoretical benchmarks to actual consumer health applications.
ChatGPT Health
ChatGPT Health provides a dedicated space within OpenAI’s platform for users to link their electronic health records and wellness applications through a partnership with b.well. This includes data from major platforms such as Apple Health and MyFitnessPal. OpenAI reports that over 230 million people globally use ChatGPT weekly for health and wellness inquiries.
To enhance privacy protection, the platform stores health-specific conversations separately from other chats. Conversations on ChatGPT Health are not used to train OpenAI’s models and feature additional encryption beyond standard protections.
Claude for Healthcare
Claude for Healthcare builds on its predecessor, Claude for Life Sciences, by integrating with industry-standard systems such as the Centers for Medicare & Medicaid Services (CMS) Coverage Database and the ICD-10 classification system. Consumers can connect health data platforms such as HealthEx, as well as wearable devices, to share lab results and health records.
Legal and Risk Issues to Consider
While both platforms offer valuable capabilities, they also introduce risks that users and healthcare providers should evaluate. Users should read the terms of service carefully: both platforms disclaim liability for the accuracy of health-related outputs and direct users to consult healthcare professionals for personalized guidance.
By integrating with medical records, these AI chatbots change how patients learn about and engage with their own health, which could reduce how often patients seek professional consultations.
Privacy Protections and Cybersecurity Risks
Although HIPAA does not typically apply to direct-to-consumer applications, platforms like ChatGPT Health implement privacy measures such as encryption and user-controlled permissions. Because these protections are voluntary rather than legally mandated, however, they may not prevent data breaches or unauthorized access.
Cybersecurity remains a concern, as aggregating sensitive health data creates lucrative targets for cybercriminals. The risk of data breaches in the healthcare sector continues to be a pressing issue.
Risks in Sensitive Contexts
Generative AI chatbots, particularly in mental health contexts, have faced scrutiny for potentially causing harm. Legal actions against companies such as Character.AI indicate that users may rely too heavily on AI interactions without clinical oversight.
Conclusion
The launch of ChatGPT Health and Claude for Healthcare gives consumers new AI tools that draw on health and wellness data through user-friendly interfaces. Understanding the limitations and risks of these systems remains crucial, however. Consumer use is governed by vendor-friendly terms, and the relationship between patients and AI platforms lacks the professional oversight that characterizes traditional healthcare.
As these platforms serve as testing grounds for consumer health AI, users, healthcare organizations, and policymakers must engage carefully with evolving regulatory frameworks to ensure safety and efficacy in healthcare delivery.