‘Not Regulated’: Launch of ChatGPT Health in Australia Causes Concern Among Experts
A recent incident involving a 60-year-old man has raised significant concerns regarding the launch of ChatGPT Health in Australia. The man, who had no prior history of mental illness, ended up in a hospital emergency department claiming that his neighbor was poisoning him. Within 24 hours, his condition deteriorated, leading to hallucinations and an attempted escape from the hospital.
Upon investigation, doctors found that the man had been consuming sodium bromide, an inorganic salt used primarily in industrial settings. He had purchased it online after ChatGPT, responding to his concerns about his salt intake, suggested it as a substitute for table salt. The case underscores the potential dangers of AI-generated health advice: ingesting sodium bromide can cause bromism, a toxic condition marked by hallucinations and impaired coordination.
Concerns Among Experts
Experts like Alex Ruan, a doctoral researcher in health misinformation at University College London, express alarm over the implications of ChatGPT Health. This AI platform allows users to connect their medical records and wellness applications, generating responses tailored to individual needs. However, Ruan points out that the distinction between general information and medical advice can be unclear, especially when responses appear confident and personalized, even if they are misleading.
“The challenge is that, for many users, it’s not obvious where general information ends and medical advice begins,” Ruan states, highlighting the risks of misguidance and harmful misinformation.
Regulatory Concerns
ChatGPT Health is not categorized as a medical device or diagnostic tool, which means it is not subject to mandatory safety controls, risk reporting mechanisms, or post-market surveillance. Ruan also criticizes HealthBench, the benchmark used to evaluate the tool's medical responses, because its methodology is largely undisclosed and has not been subjected to independent peer review.
An OpenAI spokesperson responded to these concerns by saying that more than 200 physicians from 60 countries helped advise on and refine the models that power ChatGPT Health. The spokesperson emphasized that the platform has strong privacy protections, with user data encrypted and shared only with consent or under specific circumstances.
Public Concerns and Potential Benefits
Dr. Elizabeth Deveny, CEO of the Consumers Health Forum of Australia, acknowledges the rising out-of-pocket medical costs and long wait times as factors driving people towards AI solutions like ChatGPT Health. While she sees the potential for AI to assist in managing chronic conditions and providing multilingual health information, she warns against users taking AI-generated advice at face value.
“Large global tech companies are moving faster than governments,” Deveny cautions, stressing the need for regulations surrounding privacy, transparency, and data collection. “This isn’t about stopping AI. It’s about acting before mistakes, bias, and misinformation are replicated at speed and scale, in ways that are almost impossible to unwind.”