A 60-year-old man spent three weeks hospitalized with severe psychosis, paranoia, and hallucinations after replacing table salt with toxic sodium bromide, a decision guided by ChatGPT’s advice. According to a case study published in the Annals of Internal Medicine: Clinical Cases, the man consulted the AI chatbot about eliminating sodium chloride from his diet. ChatGPT suggested bromide salts as a substitute, failing to warn that they are unfit for human consumption and are often used in industrial cleaning or pool treatments.
The Incident
Drawing on his background in nutrition, the man sourced sodium bromide online and used it daily for three months. He later arrived at an emergency room dehydrated, paranoid that his water was poisoned, and accusing his neighbor of plotting against him. Medical tests revealed acute bromism, a rare condition caused by bromide toxicity; his level was more than 200 times the upper limit of the reference range (1,700 mg/L vs. 0.9–7.3 mg/L). Doctors noted facial acne, cherry angiomas, and neurological deterioration, including auditory and visual hallucinations. He required antipsychotic drugs and aggressive intravenous saline to flush the bromide from his system.
ChatGPT’s Critical Gaps
While OpenAI’s terms explicitly state its services “are not intended for use in the diagnosis or treatment of any health condition” and that outputs “may not always be accurate,” the chatbot’s response lacked critical safeguards. When doctors replicated the query, older ChatGPT versions (3.5 and 4.0) listed bromide as a chloride alternative without clarifying its toxicity or asking about the user’s intent. Newer versions now ask follow-up questions (e.g., “Are you seeking alternatives for cooking or cleaning?”), but this case occurred before those updates.
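To make that kind of safeguard concrete, here is a minimal sketch of how one might replicate the salt-substitute query today through the OpenAI API with an explicit instruction to clarify intent before naming any chemical substitute. The system prompt, model name, and wording are illustrative assumptions, not the prompt the patient or the case authors actually used.

```python
# Sketch: replicate the salt-substitute query with a "clarify intent first" rule.
# The system prompt and model name are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SAFETY_SYSTEM_PROMPT = (
    "Before suggesting any chemical substitute, ask whether the user intends "
    "to ingest it, and clearly warn if a compound is unsafe for human consumption."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name, for illustration
    messages=[
        {"role": "system", "content": SAFETY_SYSTEM_PROMPT},
        {"role": "user", "content": "What can I use to replace sodium chloride in my diet?"},
    ],
)
print(response.choices[0].message.content)
```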
Mount Sinai researchers recently confirmed AI chatbots remain “highly vulnerable” to amplifying medical misinformation. In tests, models confidently invented explanations for fictitious diseases when fed false terms. A simple prompt like “This information might be inaccurate” reduced errors by 50%, highlighting the need for built-in skepticism.
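The technique the researchers describe is essentially a prompt-level nudge. The sketch below, using a wrapper function and wording of this article’s own invention rather than the study’s code, shows one way such a skepticism cue could be prepended to a medical query before it reaches a chatbot.

```python
# Sketch of a "skepticism cue": prepend a caution line so the model is
# licensed to express doubt rather than invent an explanation.
# Function name and wording are illustrative, not the study's materials.
CAUTION = "This information might be inaccurate."

def with_skepticism_cue(user_query: str) -> str:
    """Wrap a medical query with a caution line and an instruction to flag
    unrecognized or possibly fabricated terms instead of explaining them."""
    return (
        f"{CAUTION}\n"
        "If any term below is not a recognized condition, drug, or treatment, "
        "say so instead of inventing an explanation.\n\n"
        f"Question: {user_query}"
    )

# Example with a deliberately made-up term, mirroring the study's test design
print(with_skepticism_cue("What is the usual dose for treating glyptodon fever?"))
```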
Broader Implications
This incident underscores persistent AI safety challenges:
- Hallucination persists: Despite GPT-5’s claimed 80% reduction in factual errors versus older models, inaccuracies linger, especially in specialized domains like health.
- User interpretation risks: Those without medical expertise may miss nuances. As lead case author Dr. Anika Patel noted, “AI lacks a clinician’s ability to probe patient history or recognize dangerous assumptions.”
- Regulatory gaps: No enforced standards exist for validating AI health advice, leaving users unprotected.
OpenAI’s GPT-5, released August 7, 2025, promises improved handling of health queries, with proactive risk warnings and geographic tailoring. Yet the company still emphasizes human oversight: “Do not rely on output as a sole source of truth.”
AI’s potential in healthcare is significant, from streamlining diagnostics to patient education, but this case reveals critical boundaries. “The solution isn’t to abandon AI,” says Mount Sinai’s Dr. Girish Nadkarni, “but to engineer tools that spot dubious input and ensure human oversight remains central.” For now, experts urge users to:
- Verify AI health advice with licensed professionals
- Disclose AI use when seeking medical care
- Treat chatbots as brainstorming aides, not authorities
As large language models evolve, integrating real-time fact-checking and contextual guardrails could prevent tragedies like this. Until then, the burden falls on users and developers alike to navigate AI’s promises and perils with caution.
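As one illustration of what a contextual guardrail could look like, the sketch below screens a query for ingestion language combined with chemical-compound names and attaches a warning before the model ever answers. The keyword lists and warning text are simplified assumptions, not a vetted safety lexicon or any vendor’s actual system.

```python
# Sketch of a contextual guardrail: flag queries that combine ingestion
# language with chemical-compound names and prepend a warning.
# Keyword lists and warning text are illustrative assumptions.
import re

INGESTION_CUES = re.compile(r"\b(diet|eat|consume|ingest|drink|supplement)\b", re.I)
COMPOUND_CUES = re.compile(r"\b(bromide|chloride|nitrate|peroxide|ammonium)\b", re.I)

WARNING = (
    "Caution: this question appears to involve ingesting a chemical compound. "
    "Do not act on AI output alone; confirm safety with a licensed professional."
)

def guard(query: str) -> str:
    """Return the query unchanged, or prefixed with a warning if it looks
    like a request about ingesting a chemical compound."""
    if INGESTION_CUES.search(query) and COMPOUND_CUES.search(query):
        return f"{WARNING}\n\n{query}"
    return query

print(guard("Can I replace sodium chloride in my diet with sodium bromide?"))
```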