‘Not Regulated’: Launch of ChatGPT Health in Australia Causes Concern Among Experts

A recent incident involving a 60-year-old man has raised significant concerns about the launch of ChatGPT Health in Australia. The man, who had no prior history of mental illness, presented at a hospital emergency department claiming that his neighbor was poisoning him. Within 24 hours his condition deteriorated: he began hallucinating and attempted to escape from the hospital.

Upon investigation, doctors found that the man had been consuming sodium bromide, an inorganic salt used primarily for industrial purposes. He had purchased it online after receiving misleading advice from ChatGPT, which suggested it as a substitute for table salt when he raised concerns about his sodium intake. The case underscores the potential dangers of AI-generated health advice: chronic sodium bromide ingestion can cause bromism, a toxic condition characterized by hallucinations and impaired coordination.

Concerns Among Experts

Experts like Alex Ruan, a doctoral researcher in health misinformation at University College London, express alarm over the implications of ChatGPT Health. This AI platform allows users to connect their medical records and wellness applications, generating responses tailored to individual needs. However, Ruan points out that the distinction between general information and medical advice can be unclear, especially when responses appear confident and personalized, even if they are misleading.

“The challenge is that, for many users, it’s not obvious where general information ends and medical advice begins,” Ruan states, highlighting the risks of misguidance and harmful misinformation.

Regulatory Concerns

ChatGPT Health is not classified as a medical device or diagnostic tool, which means it is exempt from mandatory safety controls, risk-reporting mechanisms, and post-market surveillance. Ruan also criticizes the methodology behind HealthBench, the benchmark OpenAI uses to evaluate the model's health responses, as largely undisclosed and not subjected to independent peer review.

An OpenAI spokesperson responded to these concerns by stating that over 200 physicians from 60 countries were involved in advising and refining the models that power ChatGPT Health. They emphasized that the platform features strong privacy protections, with user data encrypted and shared only with consent or under specific circumstances.

Public Concerns and Potential Benefits

Dr. Elizabeth Deveny, CEO of the Consumers Health Forum of Australia, points to rising out-of-pocket medical costs and long wait times as factors driving people towards AI tools like ChatGPT Health. While she sees potential for AI to help manage chronic conditions and provide multilingual health information, she warns against taking AI-generated advice at face value.

“Large global tech companies are moving faster than governments,” Deveny cautions, stressing the need for regulations surrounding privacy, transparency, and data collection. “This isn’t about stopping AI. It’s about acting before mistakes, bias, and misinformation are replicated at speed and scale, in ways that are almost impossible to unwind.”
