{"id":50931,"date":"2025-09-08T14:20:08","date_gmt":"2025-09-08T14:20:08","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/50931\/"},"modified":"2025-09-08T14:20:08","modified_gmt":"2025-09-08T14:20:08","slug":"the-right-and-wrong-ways-for-patients-to-use-chatbots","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/50931\/","title":{"rendered":"The right \u2014 and wrong \u2014 ways for patients to use chatbots"},"content":{"rendered":"<p>At 3 a.m., everything feels worse. That unfamiliar ache in your side? Not awful, but persistent. You know better than to panic-search symptoms \u2014 but your doctor\u2019s office won\u2019t open for hours, and Google catastrophizes more than it clarifies. So you ask a chatbot instead.<\/p>\n<p>What you get isn\u2019t just information. You get a story.<\/p>\n<p>That\u2019s what makes this moment different from the past 20 years of digital health-seeking, when patients would turn to WebMD or \u201cDr. Google,\u201d often to their doctors\u2019 dismay. Now, instead of searching, patients are using technology to shape explanations. Generative AI tools don\u2019t just summarize; they simulate conversation. They let people organize thoughts, explore outcomes, and rehearse how they\u2019ll describe what they\u2019re feeling. The result isn\u2019t a diagnosis. It\u2019s a draft.<\/p>\n<p>And that draft is already changing what happens in the exam room.<\/p>\n<p>More people are turning to these tools than most clinicians realize. A recent <a href=\"https:\/\/www.kff.org\/health-information-trust\/poll-finding\/kff-health-misinformation-tracking-poll-artificial-intelligence-and-health-information\/\" target=\"_blank\" rel=\"noopener nofollow\">KFF Health Tracking Poll<\/a> found that 17% of U.S. adults \u2014 and a higher percentage of younger adults \u2014 have used generative tools to ask health-related questions. The behavior is already here. 
What matters now is understanding how it\u2019s shaping the conversation.<\/p>\n<p>\t\t\t<img decoding=\"async\" width=\"768\" height=\"432\" src=\"https:\/\/www.europesays.com\/ie\/wp-content\/uploads\/2025\/09\/AdobeStock_296418364-768x432.jpeg\" class=\"attachment-article-main-medium-large size-article-main-medium-large wp-post-image\" alt=\"\" loading=\"lazy\"  \/>\t\t<\/p>\n<p>As a clinical psychologist, I\u2019ve spent decades helping people make sense of confusion and anxiety. One of the most powerful tools in therapy is narrative: not just what happened, but how it\u2019s told. People don\u2019t simply recall events \u2014 they craft them into meaning. A rehearsed story feels true, even when it\u2019s not. That\u2019s the shift we\u2019re now seeing in health care: uncertainty processed not through facts alone, but through fluent, plausible, practiced explanations.<\/p>\n<p>Years ago, not long after search engines became part of everyday life, my family physician remarked that many of his patients now came in \u201cknowing more about medicine than they ever did.\u201d His job, he said, was no longer to deliver information \u2014 it was to help them interpret it. He had excellent social instincts and clinical wisdom, and even then, he could sense that the patient role was shifting.<\/p>\n<p>That shift continues today \u2014 but with a deeper twist. Patients aren\u2019t just arriving with facts. They\u2019re arriving with shaped, rehearsed stories.<\/p>\n<p>Symptom searching isn\u2019t new. 
What\u2019s new is the ability to interact, refine, and rehearse. Patients arrive not with scattered complaints, but with structured narratives \u2014 written down, thought through, sometimes emotionally processed. That structure shapes the conversation. It changes what gets shared, how it\u2019s framed, and how open a patient might be to hearing something different.<\/p>\n<p>In a recent <a href=\"https:\/\/jamanetwork.com\/journals\/jama\/article-abstract\/2836827\" target=\"_blank\" rel=\"noopener nofollow\">JAMA essay<\/a>, a physician described a patient who came in with dizziness and used strikingly clinical language: \u201cIt\u2019s not vertigo, more of a presyncope kind of feeling.\u201d When the doctor asked if she worked in health care, the patient said no \u2014 she\u2019d used a chatbot to prepare for the appointment.<\/p>\n<p>\u201cIt felt like someone else was in the room,\u201d the doctor wrote.<\/p>\n<p>That line captures where we are. When a patient feels confident in a story \u2014 even a wrong one \u2014 it becomes harder to revise. That\u2019s the new clinical challenge: not just gathering history, but renegotiating it. If a tool steers someone toward a reassuring but inaccurate explanation, a clinician may have to reopen uncertainty that feels already resolved.<\/p>\n<p>A friend of mine used a chatbot to understand her persistent nausea. It suggested indigestion or stress. Reassured, she delayed seeking care. A week later, her doctor diagnosed gallstones. The tool hadn\u2019t been blatantly wrong \u2014 but its tone and narrative coherence diverted her attention and cost her time.<\/p>\n<p>Another colleague told me about a teenager who came in with low mood and fatigue. Before the visit, she\u2019d turned to a chatbot and concluded she had a dietary deficiency. By the time she reached the clinic, she was already taking supplements and resistant to exploring emotional factors. 
The tool hadn\u2019t dismissed mental health \u2014 it had simply offered an explanation she preferred, and that subtly rerouted the encounter.<\/p>\n<p>This is the risk: fluency that feels like accuracy. In structured settings, generative tools can perform impressively. GPT-4, for instance, has scored over 90% on standardized clinical vignettes \u2014 outperforming physicians in some studies focused on <a href=\"https:\/\/doi.org\/10.1080\/00015385.2024.2303528\" target=\"_blank\" rel=\"noopener nofollow\">cardiac symptom triage<\/a> and <a href=\"https:\/\/doi.org\/10.1038\/s41746-025-01486-5\" target=\"_blank\" rel=\"noopener nofollow\">complex gastrointestinal cases<\/a>. But in real-world use by laypeople, accuracy drops sharply. A <a href=\"https:\/\/arxiv.org\/abs\/2504.18919\" target=\"_blank\" rel=\"noopener nofollow\">2025 preprint<\/a> found that diagnostic accuracy fell to around 35% when used without clinical framing. The tool isn\u2019t failing \u2014 the question often is.<\/p>\n<p>But it doesn\u2019t have to be this way. I\u2019ve spoken with friends, family members, and patients about how they use these tools when preparing for visits. 
I\u2019ve seen how much the output changes when users shift their inputs from vague to focused \u2014 from \u201cWhat\u2019s wrong with me?\u201d to \u201cHere\u2019s what I\u2019m feeling, here\u2019s what I\u2019m worried about, and here\u2019s what I want to ask my doctor.\u201d<\/p>\n<p>The difference is not about getting a better answer. It\u2019s about organizing thought. Prompts like \u201cI\u2019m preparing to see my doctor and want to describe my symptoms clearly\u201d often lead to structured, practical guidance: timelines, symptom tracking, questions to raise. That kind of organization doesn\u2019t replace care \u2014 but it can reshape it.<\/p>\n<p>The issue isn\u2019t just what these tools can do. It\u2019s whether people are taught how to use them well. Most patients don\u2019t know what memory functions are. Many don\u2019t realize that unclear questions lead to unclear \u2014 and sometimes misleading \u2014 replies. What\u2019s needed is not technical training, but conversational guidance.<\/p>\n<p>Clinicians can help. Instead of pretending patients arrive with a blank slate, they should ask:<\/p>\n<ul class=\"wp-block-list\">\n<li>\u201cDid you look anything up before coming in?\u201d<\/li>\n<li>\u201cDid you use any tools to think it through?\u201d<\/li>\n<li>\u201cWhat were you hoping the problem might be \u2014 or hoping it wasn\u2019t?\u201d<\/li>\n<\/ul>\n<p>These questions invite the story that\u2019s already formed. They help uncover the mental path a patient has already walked \u2014 and create space for co-creation rather than correction.<\/p>\n<p>Health systems and digital platforms can support this shift too. Patient portals could offer pre-visit templates or example prompts. Providers could share safe-use language or recommend framing strategies. 
Even small nudges \u2014 like suggesting users say, \u201cHelp me prepare for my visit\u201d instead of \u201cWhat\u2019s wrong with me?\u201d \u2014 can lead to more productive interactions.<\/p>\n<p>We should be teaching people how to build personal health narratives \u2014 drafts that are clear but revisable, reflective but not prematurely certain. The goal isn\u2019t to limit autonomy. It\u2019s to preserve flexibility, so the story that gets told can still change when it needs to.<\/p>\n<p>Patients used to arrive with scattered symptoms and search history. Now, many arrive with a narrative already formed. That story can be useful. It can be misleading. Either way, it changes the conversation.<\/p>\n<p>The era of raw symptom search is over. This is the era of narrative rehearsal. And if clinicians don\u2019t start listening for the stories patients have already begun to tell themselves, they\u2019ll lose the chance to shape how those stories end.<\/p>\n<p>Harvey Lieberman, Ph.D., is a clinical psychologist and consultant who has led major mental health programs and now writes on the intersection of care and technology. His recent New York Times guest essay, \u201c<a href=\"https:\/\/www.nytimes.com\/2025\/08\/01\/opinion\/chatgpt-therapist-journal-ai.html\" target=\"_blank\" rel=\"noopener nofollow\">I\u2019m a Therapist. ChatGPT Is Eerily Effective<\/a>,\u201d explored his year-long experiment using AI in therapy.<\/p>\n","protected":false},"excerpt":{"rendered":"At 3 a.m., everything feels worse. That unfamiliar ache in your side? Not awful, but persistent. 
You know&hellip;\n","protected":false},"author":2,"featured_media":50932,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[275],"tags":[289,18,135,475,474,19,17,167,3283,3284],"class_list":{"0":"post-50931","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-artificial-intelligence","9":"tag-eire","10":"tag-health","11":"tag-health-care","12":"tag-healthcare","13":"tag-ie","14":"tag-ireland","15":"tag-mental-health","16":"tag-patients","17":"tag-physicians"},"share_on_mastodon":{"url":"","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/50931","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=50931"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/50931\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/50932"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=50931"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=50931"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=50931"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}