A 60-year-old man arrived at a Seattle hospital convinced his neighbor was poisoning him. Though medically stable at first, he soon developed hallucinations and worsening paranoia. The cause turned out to be bromide toxicity, brought on by a health experiment he began after consulting ChatGPT. The case, published in Annals of Internal Medicine: Clinical Cases, highlights a rare but reversible form of psychosis that may have been influenced by generative artificial intelligence.

Psychosis is a mental state characterized by a disconnection from reality. It often involves hallucinations, where people hear, see, or feel things that are not there, or delusions, which are fixed beliefs that persist despite clear evidence to the contrary. People experiencing psychosis may have difficulty distinguishing between real and imagined experiences, and may behave in ways that seem irrational or confusing to others.

Psychosis is not a diagnosis in itself, but a symptom that can appear in a variety of medical and psychiatric conditions, including schizophrenia, bipolar disorder, and severe depression. It can also be triggered by brain injuries, infections, or toxic substances.

The man’s initial presentation was unusual but not dramatic. He came to the emergency department reporting that his neighbor was trying to poison him. His vital signs and physical examination were mostly normal. Routine laboratory tests, however, revealed some striking abnormalities: extremely high chloride levels, a highly negative anion gap, and severe phosphate deficiency. Despite this, he denied using any medications or supplements.

As doctors searched for answers, his mental state worsened. Within a day, he was hallucinating and behaving erratically. He had to be placed on a psychiatric hold and was started on risperidone to manage his symptoms. But a deeper look at his bloodwork suggested a rare toxic condition: bromism.

Bromism occurs when bromide, a chemical closely related to chloride, builds up in the body to toxic levels. Historically, bromide was used in sedatives and other medications, but it was phased out of over-the-counter products in the United States by the late 1980s. It is still used in some industrial and cleaning applications. In cases of bromism, bromide is picked up by standard laboratory chloride assays, producing falsely elevated chloride readings, and the accumulated bromide can cause neurological and psychiatric symptoms ranging from confusion to full-blown psychosis.
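The anion gap helps explain why the lab results looked so strange. Because many clinical chloride assays cannot tell bromide and chloride apart, accumulated bromide gets reported as extra chloride. The worked example below uses rounded, assumed electrolyte values in mmol/L (not the patient’s actual results) to show how that artifact can push the calculated gap below zero:

```latex
\begin{aligned}
\text{Anion gap} &= [\mathrm{Na^+}] - \bigl([\mathrm{Cl^-}] + [\mathrm{HCO_3^-}]\bigr) \\
\text{Typical electrolytes:}\quad & 140 - (104 + 24) = 12 \ \text{mmol/L} \\
\text{Bromide counted as chloride:}\quad & 140 - (150 + 24) = -34 \ \text{mmol/L}
\end{aligned}
```

A genuinely negative anion gap is physiologically implausible, which is why a result like this points clinicians toward an interfering substance such as bromide rather than a true chloride excess.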

The man’s doctors consulted with Poison Control and eventually confirmed that bromism was the likely cause of his symptoms. After being stabilized with fluids and nutritional support, the man revealed a key detail: for the past three months, he had been replacing regular table salt with sodium bromide. His motivation was dietary. He wanted to eliminate chloride from his diet, based on what he believed were the harmful effects of sodium chloride.

This belief was strengthened, he explained, by information he had received from ChatGPT. While experimenting with ways to improve his health, he asked the chatbot whether chloride could be replaced. The model reportedly offered bromide as an alternative, without flagging any health risks or asking why the substitution was being considered. Encouraged by what he interpreted as scientific endorsement, he purchased sodium bromide online and began consuming it regularly.

Medical tests confirmed that his blood bromide level had reached 1700 mg/L, more than 200 times the upper limit of the reference range of 0.9 to 7.3 mg/L. After he stopped ingesting sodium bromide and received supportive treatment, his psychiatric symptoms gradually subsided. He was weaned off risperidone before discharge and remained stable at follow-up appointments.

This unusual incident carries several caveats. A single case report cannot establish causation. There may have been multiple factors contributing to the man’s psychosis, and his exact interaction with ChatGPT remains unverified. The medical team does not have access to the chatbot conversation logs and cannot confirm the exact wording or sequence of messages that led to the decision to consume bromide.

Case reports, by nature, describe only one patient’s experience. They are not designed to test hypotheses or rule out alternative explanations. In some cases, rare outcomes may simply be coincidental or misunderstood. Without controlled studies or broader surveillance, it is difficult to know how common—or uncommon—such incidents truly are.

Despite their limitations, case reports often serve as early warning signs in medicine. They tend to highlight novel presentations, unexpected side effects, or emerging risks that are not yet widely recognized. Many medical breakthroughs and safety reforms have started with a single unusual case. In this instance, the authors argue that the use of AI-powered chatbots should be considered when evaluating unusual psychiatric presentations, especially when patients are known to seek health advice online.

The case also raises broader concerns about the growing role of generative AI in personal health decisions. Chatbots like ChatGPT are trained to provide fluent, human-like responses. But they do not understand context, cannot assess user intent, and are not equipped to evaluate medical risk. In this case, the bot may have listed bromide as a chemical analogue to chloride without realizing that a user might interpret that information as a dietary recommendation.

The idea that chatbots could contribute to psychosis once seemed speculative. But recent editorials and anecdotal reports suggest that this may be a real, if rare, phenomenon—especially among individuals with underlying vulnerability. Danish psychiatrist Søren Dinesen Østergaard was among the first to raise the alarm. In 2023, he published a warning in Schizophrenia Bulletin, suggesting that the cognitive dissonance of interacting with a seemingly intelligent but ultimately mechanical system could destabilize users who already struggle with reality-testing.

Since then, multiple stories have emerged of individuals who experienced dramatic changes in thinking and behavior after prolonged chatbot use. Some became convinced that they had divine missions, while others believed they were communicating with sentient beings. In one reported case, a man believed he had been chosen by ChatGPT to “break” a simulated reality. In another, a user’s romantic partner came to believe that the chatbot was a spiritual guide and began withdrawing from human relationships.

These stories tend to follow a pattern: intense engagement with the chatbot, increasingly eccentric beliefs, and a lack of pushback from the system itself. Critics point out that language models are typically tuned to maximize user satisfaction, which can make them sycophantic, agreeing with or amplifying a user’s worldview even when it is distorted or delusional. That dynamic can reinforce confirmation bias, the tendency to seek out and accept only information that supports an existing belief, which researchers regard as a contributor to the formation and persistence of delusions.

Some developers are exploring ways to detect when a conversation appears to touch on delusional thinking—such as references to secret messages or supernatural identity—and redirect users to professional help. But such systems are still in their infancy, and the commercial incentives for chatbot companies tend to prioritize engagement over safety.
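As a rough illustration only, a screening layer of that kind might resemble the sketch below. It assumes a simple keyword heuristic; the function name, phrase list, and redirect message are hypothetical and not drawn from any deployed system, which would more likely rely on trained classifiers and clinical input.

```python
import re

# Illustrative phrases loosely tied to the delusional themes described above
# (secret messages, supernatural identity, simulated reality). Purely hypothetical.
RISK_PATTERNS = [
    r"secret messages? (meant|intended) (only )?for me",
    r"chosen (one|by you)",
    r"break (the|this) simulation",
    r"you are my (god|spiritual guide)",
]

REDIRECT_MESSAGE = (
    "I can't help with this. Talking with a mental health professional or "
    "someone you trust may help. If you are in crisis, please contact a local helpline."
)

def screen_message(user_message: str) -> str | None:
    """Return a redirect message if the text matches a risk pattern, else None."""
    text = user_message.lower()
    for pattern in RISK_PATTERNS:
        if re.search(pattern, text):
            return REDIRECT_MESSAGE
    return None

# Example: the first message triggers a redirect, the second does not.
print(screen_message("It sends secret messages meant only for me"))
print(screen_message("What is a good recipe for lentil soup?"))
```

Even a crude filter like this illustrates the trade-off such systems face: flag too aggressively and ordinary users are frustrated, flag too cautiously and the conversations that matter slip through.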

The case of the Seattle man is a sobering reminder of how even a seemingly minor substitution—replacing salt with a chemical cousin—can spiral into a medical and psychiatric emergency when guided by decontextualized information. While AI chatbots have potential to support healthcare in structured settings, this report suggests they may also present hidden risks, especially for users who take their advice literally.

The study, “A Case of Bromism Influenced by Use of Artificial Intelligence,” was authored by Audrey Eichenberger, Stephen Thielke, and Adam Van Buskirk.