What started as a harmless attempt to improve his diet left a 60-year-old man hospitalised for three weeks, battling hallucinations and an ailment virtually extinct in modern medicine: bromism. The condition, once common in the Victorian era, has now reappeared in a highly unusual case reported on August 5, 2025, in the Annals of Internal Medicine.
The patient had been seeking ways to cut back on table salt and turned to ChatGPT for alternatives. The AI suggested sodium bromide, a compound better known for sanitising swimming pools than for flavouring food. Taking the advice at face value, the man began substituting it for regular salt, purchasing the chemical online and using it for three months in a bid to eliminate chloride from his diet.
From Kitchen Experiment to Psychiatric Crisis
Previously healthy and with no psychiatric history, the man eventually arrived at the emergency department claiming his neighbour was trying to poison him. Tests revealed abnormal electrolyte readings, including hyperchloremia and a negative anion gap, prompting suspicion of bromide poisoning: bromide interferes with standard chloride assays, so it registers as a falsely elevated chloride level and can push the calculated anion gap negative.
Within a day, his symptoms escalated: paranoia deepened, hallucinations became both visual and auditory, and he was placed under an involuntary psychiatric hold. Physicians later learned he had also been experiencing fatigue, poor sleep, acne, mild coordination problems and intense thirst, all classic indicators of bromism.
A Forgotten Diagnosis
Bromism was widespread in the late 19th and early 20th centuries, when bromide salts were prescribed for headaches, insomnia and anxiety. At its height, it accounted for up to 8% of psychiatric hospital admissions. The U.S. FDA phased bromide out of ingestible products between 1975 and 1989, making cases today extremely rare. Bromide accumulates in the body over time, causing neurological, psychiatric and skin-related problems. In this case, the man's bromide concentration reached 1,700 mg/L, more than 200 times the safe threshold.
The AI Connection
When researchers repeated the man's query using ChatGPT 3.5, the chatbot again recommended bromide as a substitute for sodium chloride. While it noted that context mattered, it gave no clear toxicity warning and did not ask why the substitution was being made, something a trained clinician would typically do.
The case authors caution that while AI can spread medical knowledge quickly, it can also provide unsafe, context-free suggestions. “AI tools may produce scientific errors, lack the ability to critically interpret results and inadvertently amplify misinformation,” the report warned.
Treatment and Lessons Learned
With aggressive IV fluids and electrolyte correction, the patient’s symptoms resolved and lab results normalised. He was discharged after three weeks and remained stable without antipsychotic medication at a two-week follow-up.
The episode stands as a stark reminder: AI-generated advice is no substitute for medical expertise, and pool chemicals have no place in the kitchen.
OpenAI Introduces Stricter Mental Health Safeguards
In response to rising concerns over the emotional and physical risks of AI in personal wellbeing, OpenAI announced on August 4 a series of new restrictions on how ChatGPT handles mental health-related queries. The company said the chatbot will no longer act as a therapist, emotional support figure or life coach, and will instead offer evidence-based resources, encourage breaks and avoid giving advice on high-stakes personal decisions.
The move follows criticism that earlier GPT-4o versions could be overly agreeable, offering reassurance instead of practical or safe recommendations. USA Today reported that OpenAI has acknowledged rare but serious incidents where the chatbot failed to identify signs of emotional distress or delusional thinking. Research cited by The Independent further highlights the risk of AI misjudging crisis situations due to its inability to interpret human emotions with true nuance.