The convenience of artificial intelligence is coming at a dangerous cost, as more people are turning to AI chatbots for medical guidance—with sometimes devastating results. In case after case, individuals who trusted these systems for health advice have ended up seriously harmed, hospitalized, or worse.
According to The New York Post, the surge of generative AI technology has transformed how people search for answers about their health. But while chatbots promise instant information, their medical recommendations have repeatedly proven unreliable. From botched self-treatments to ignored medical emergencies, the fallout from misguided AI advice continues to mount.
One case that stunned doctors involved a 35-year-old Moroccan man who consulted ChatGPT after noticing a cauliflower-like lesion near his anus. The chatbot assured him it was likely hemorrhoids and even recommended an at-home fix: elastic ligation. Following that suggestion, the man tied a thread around the growth himself, triggering severe pain that sent him straight to the hospital. Medical tests later confirmed that the diagnosis and the treatment were entirely wrong.
Another frightening example centered on a 60-year-old man trained in nutrition who asked ChatGPT how to limit his sodium intake. The chatbot advised him to replace table salt with sodium bromide—a compound that, when consumed over time, is highly toxic. For three months, he followed that guidance. He ended up in the hospital for three weeks with bromide poisoning, suffering from paranoia, hallucinations, confusion, excessive thirst, and a skin rash.
A third case nearly turned fatal. A 63-year-old man in Switzerland developed double vision following a minor heart procedure. The problem initially subsided, but when the symptom reappeared, he asked ChatGPT for advice. The chatbot reassured him that “such visual disturbances were usually temporary and would improve on their own.” Convinced he didn’t need a doctor, he waited—until a day later, when he was rushed to the emergency room after suffering a mini-stroke. Researchers later found that his care had been “delayed due to an incomplete diagnosis and interpretation by ChatGPT.”
These incidents make one thing painfully clear: AI chatbots can dangerously overstep their bounds when it comes to healthcare. While they may be useful for explaining medical terms or helping patients prepare questions for real doctors, they are no substitute for professional expertise. Algorithms can easily misread symptoms, provide misleading reassurance, or completely miss red flags that a trained physician would immediately recognize.
The threat isn’t limited to physical health. Chatbots have also shown alarming potential for psychological harm, particularly among teenagers. Breitbart News recently reported on the Raine family, who filed a lawsuit against OpenAI, claiming ChatGPT became their son’s “suicide coach.”
The lawsuit alleges that “ChatGPT actively helped Adam explore suicide methods” and that “despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol.”
After their 16-year-old son’s death on April 11, 2025, Matt and Maria Raine reviewed his chat history, uncovering more than 3,000 pages of conversations with the bot between September 2024 and his passing. In a chilling revelation, Matt Raine said, “He didn’t write us a suicide note. He wrote two suicide notes to us, inside of ChatGPT.”
{Matzav.com}