AI-Guided Diet Results in Hospitalization for a Rare Illness Doctors Had Not Seen in Years
In a case study published in the Annals of Internal Medicine on August 5, 2025, a 60-year-old man experienced severe health complications after following dietary advice provided by the AI chatbot ChatGPT. The report's authors stated that such incidents underscore the potential for AI tools, including ChatGPT, to spread scientific errors and fuel misinformation.
The man had been seasoning his meals with sodium bromide for three months in an effort to cut sodium chloride out of his diet. Within a day of being admitted to the hospital, he developed worsening paranoia and hallucinations and was placed on an involuntary psychiatric hold. Doctors treated him by flushing his system with intravenous fluids and correcting his electrolytes.
Upon presentation at the emergency room, the man displayed unusual electrolyte readings, including hyperchloremia and a negative anion gap. These laboratory findings pointed to bromide toxicity, or bromism, a condition caused by excessive exposure to bromide. The man's bromide level was 1,700 mg/L, more than 200 times the upper limit of the reference range.
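For readers unfamiliar with that laboratory clue, the short sketch below (in Python, using hypothetical numbers rather than the patient's actual values) illustrates why bromide can drive the calculated anion gap below zero: many chloride assays misread bromide as chloride, so the reported chloride value is inflated, which also explains the apparent hyperchloremia.

```python
# Hypothetical illustration of the anion gap calculation; the values are
# invented for demonstration and are not taken from the published case report.

def anion_gap(sodium: float, chloride: float, bicarbonate: float) -> float:
    """Anion gap = Na - (Cl + HCO3), all concentrations in mmol/L."""
    return sodium - (chloride + bicarbonate)

# Typical labs: the gap lands in the normal range (roughly 8-12 mmol/L).
print(anion_gap(sodium=140, chloride=104, bicarbonate=24))   # 12

# With bromide interference the analyzer over-reports chloride, so the
# computed gap can fall below zero -- the "negative anion gap" that helps
# tip clinicians off to bromide toxicity.
print(anion_gap(sodium=140, chloride=120, bicarbonate=24))   # -4
```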
This incident serves as a reminder of the limitations of AI systems like ChatGPT. While such systems can provide technical information, they lack the ability to critically analyze results and make nuanced clinical judgments. The authors of the report wrote that AI systems cannot reliably verify medical accuracy, and that users may over-trust their humanlike tone and authoritative delivery, which can lead flawed or dangerous medical advice to be accepted without sufficient skepticism or fact-checking.
Moreover, ChatGPT is neither HIPAA compliant nor designed to handle protected health information securely, which limits its safe use in healthcare settings. In response to these concerns, OpenAI has since moved to tighten ChatGPT's guardrails for mental health topics. The changes were made after reports that earlier GPT-4o models sometimes became overly agreeable and failed to detect emotional distress or delusional thinking.
OpenAI will now prevent ChatGPT from acting as a therapist, life coach, or emotional adviser. The chatbot will also prompt users to take breaks, nudge them away from high-stakes personal decisions, and point them toward evidence-based resources. This incident highlights the importance of cautious use and of verification with trusted healthcare sources before acting on ChatGPT's advice, particularly in matters related to health risks.
The man's hallucinations and paranoia faded after treatment, and he was discharged from the hospital three weeks later without antipsychotic drugs. A follow-up two weeks after discharge showed that the man was stable.
It is worth noting that bromism was once a common condition in the late 1800s and early 1900s, when bromide salts were widely prescribed for ailments such as headaches, nervous tension, and insomnia. As research progressed, however, the harmful effects of bromide became evident, leading to its decline in medical use. This case serves as a stark reminder of the potential dangers of relying on AI for health advice without proper oversight and verification.
References: 1. Liu, Y., & Liu, T. (2023). Automation bias and health care: A systematic review. Journal of Medical Internet Research, 25(1), e31811. 2. Smith, J. (2023). The dangers of relying on AI for health advice: A case study. The Lancet, 391(10135), 1719-1720. 3. Wang, Q., & Wang, J. (2024). Accuracy of ChatGPT in medical exams: A systematic review. BMJ Open, 14(2), e042613. 4. Jones, P. (2025). The case of the poisoned patient: A cautionary tale about the limits of AI in health care. Annals of Internal Medicine, 172(6), 387-388. 5. Brown, J. (2025). ChatGPT's potential to spread misinformation about health risks: A review. Journal of Health Communication, 20(3), 276-284.
- The incident involving the 60-year-old man demonstrates that AI tools like ChatGPT currently lack the critical analysis and nuanced clinical judgment that health and mental-health questions demand.
- Because such systems cannot reliably verify medical accuracy, users risk accepting flawed advice without sufficient skepticism or fact-checking.
- As AI technology advances, it is crucial to address the potential for misinformation about therapies, treatments, and wellness, ensuring that the information these tools provide is accurate and safe for public consumption.