{"id":356013,"date":"2025-08-19T05:32:15","date_gmt":"2025-08-19T05:32:15","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/356013\/"},"modified":"2025-08-19T05:32:15","modified_gmt":"2025-08-19T05:32:15","slug":"ai-is-the-new-dr-google-across-the-globe","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/356013\/","title":{"rendered":"AI Is the New Dr Google \u2014 Across the Globe"},"content":{"rendered":"<p>It is nothing new for patients to arrive at appointments with researched information, but artificial intelligence (AI) tools such as ChatGPT are changing the dynamics.<\/p>\n<p>The confident presentation of these tools can leave physicians feeling that their expertise is being challenged. Kumara Raja Sundar, MD, a family medicine physician at Kaiser Permanente Burien Medical Center in Burien, Washington, highlighted this trend in a recent article <a href="https:\/\/jamanetwork.com\/journals\/jama\/article-abstract\/2836827" rel="nofollow noopener" target="_blank">published in JAMA<\/a>.<\/p>\n<p>A patient visited Sundar\u2019s clinic reporting dizziness and described her symptoms with unusual precision: \u201cIt\u2019s not vertigo, but more like a presyncope feeling.\u201d She then suggested that a tilt table test might be useful for diagnosis.<\/p>\n<p>Occasionally, a patient\u2019s questions reveal a subtle familiarity with medical jargon, suggesting either relevant training or extensive study of the subject.<\/p>\n<p>Curious, Sundar asked if she worked in the healthcare sector. She replied that she had consulted ChatGPT, which recommended the tilt table test.<\/p>\n<p>For years, patients have brought newspaper clippings, internet research, and advice from friends and relatives to consultations.<\/p>\n<p>Suggestions shared in WhatsApp groups have become a regular part of clinical discussions. 
Sundar noted that this particular <a href="https:\/\/jamanetwork.com\/journals\/jama\/article-abstract\/2836827" rel="nofollow noopener" target="_blank">encounter was different<\/a>.<\/p>\n<p>The patient\u2019s tone and level of detail conveyed competence, and the confidence with which she presented the information subtly challenged his clinical judgment and treatment plans.<\/p>\n<p><strong>Clinical Practice<\/strong><\/p>\n<p>It is not surprising that large language models (LLMs), such as ChatGPT, are appealing. Recent studies have confirmed their remarkable strengths in logical reasoning and interpersonal communication.<\/p>\n<p>However, a direct comparison between LLMs and physicians is unfair. <a href="https:\/\/pubmed.ncbi.nlm.nih.gov\/37389857\/" rel="nofollow noopener" target="_blank">Clinicians often face<\/a> immense pressure, including constrained consultation times, overflowing inboxes, and a healthcare system that demands productivity and efficiency.<\/p>\n<p>Even skilled professionals struggle to perform optimally under adverse conditions.<\/p>\n<p>In contrast, generative AI is functionally limitless. This imbalance creates an unrealistic benchmark, yet it is today\u2019s reality.<\/p>\n<p>Patients want accurate information, but more importantly, they want to feel heard, understood, and reassured. \u201cUnfortunately, under the weight of competing demands, that is what often slips for me, not just because of systemic constraints but also because I am merely human,\u201d Sundar wrote.<\/p>\n<p>Despite the capabilities of generative AI, patients still visit doctors. 
Though these tools deliver confidently worded suggestions, they inevitably conclude: \u201cConsult a healthcare professional.\u201d<\/p>\n<p>The ultimate responsibility for liability, diagnostics, prescriptions, and sick notes remains with physicians.<\/p>\n<p><strong>Patient Interaction<\/strong><\/p>\n<p>In practice, this means dealing with requests such as a tilt table test for intermittent dizziness, a request that is not uncommon but often inappropriate.<\/p>\n<p>\u201cI find myself explaining concepts such as overdiagnosis, false positives, or other risks of unnecessary testing. At best, the patient understands the ideas, which may not resonate when one is experiencing symptoms. At worst, I sound dismissive. There is no function that tells ChatGPT that clinicians lack routine access to tilt-table testing or that echocardiogram appointments are delayed because of staff shortages. I have to carry those constraints into the examination room while still trying to preserve trust,\u201d Sundar emphasized in his article.<\/p>\n<p>Sundar has also noticed a different kind of paternalism creeping in when he speaks with medical students, one he has caught in his own inner monologue even when he does not say it aloud. The old line, \u201cThey probably WebMD\u2019d it and think they have cancer,\u201d has morphed into the newer, just-as-dismissive line, \u201cThey probably ChatGPT\u2019d it and are going to tell us what to order.\u201d<\/p>\n<p>This attitude often reflects defensiveness from clinicians rather than genuine engagement and carries an implicit message: We still know best. \u201cIt is an attitude that risks eroding sacred and fragile trust between clinicians and patients. It reinforces the feeling that we are not \u2018in it\u2019 with our patients and are truly gatekeeping rather than partnering. 
Ironically, that is often why I hear patients turn to LLMs in the first place,\u201d Sundar concluded.<\/p>\n<p><strong>Patient Advocacy<\/strong><\/p>\n<p>One patient said plainly, \u201cThis is how I can advocate for myself better.\u201d The word \u201cadvocate\u201d struck Sundar, capturing the effort required to persuade someone with more authority. Although clinicians still control access to tests, referrals, and treatment plans, the term conveys a sense of preparing for a fight.<\/p>\n<p>When patients feel unheard, gathering knowledge becomes a strategy for being taken seriously.<\/p>\n<p>In such situations, the usual approach of explaining false-positive test results, overdiagnosis, and test characteristics is often ineffective. From the patient\u2019s perspective, this sounds more like, \u201cI still know more than you, no matter what tool you used, and I\u2019m going to overwhelm you with things you don\u2019t understand.\u201d<\/p>\n<p><strong>Physician Role<\/strong><\/p>\n<p>The role of physicians is constantly evolving, and the transition from physician-as-authority to physician-as-advisor is intensifying. Patients increasingly present with expectations shaped by sources that are not evidence based and are often misaligned with clinical reality. As Sundar observed, \u201cThey arm themselves with knowledge to be heard.\u201d This creates a professional duty to respond with understanding rather than resistance.<\/p>\n<p>His approach centers on emotional acknowledgment before clinical discussion: \u201cI say, \u2018We\u2019ll discuss diagnostic options together. But first, I want to express my sympathy. I can hardly imagine how you feel. 
I want to tackle this with you and develop a plan.\u2019\u201d He emphasized, \u201cThis acknowledgment was the real door opener.\u201d<\/p>\n<p><strong>Global Trend<\/strong><\/p>\n<p>What began as a US trend observed by Sundar has now spread worldwide, with patients increasingly arriving at consultations armed with medical knowledge from tools like ChatGPT rather than just \u201cDr Google.\u201d<\/p>\n<p>Clinicians across health systems have reported that digitally informed patients now make up the majority.<\/p>\n<p>In a forum discussion, physicians from various disciplines shared their experiences, highlighting how pre-informed patients are now the norm. Inquiries often focus on specific laboratory values, particularly vitamin D or hormone tests. In gynecologic consultations, internet research on menstrual disorders has become a routine part of patient interactions, with an overwhelming range of answers available online.<\/p>\n<p>\u2018Chanice,\u2019 a Coliquio user and gynecologist, shared: \u201cThe answers range from \u2018It\u2019s normal; it can happen\u2019 to \u2018You won\u2019t live long.\u2019 It\u2019s also common to Google medication side effects, and usually, women end up experiencing pretty much every side effect, even though they didn\u2019t have them before.\u201d<\/p>\n<p>How should doctors respond to this trend? Opinions are clear: openness, education, and transparency are essential, ideally delivered in a structured manner.<\/p>\n<p>\u201cGet the patients on board; educate them. In writing! Each and every one of them. Once it\u2019s put into words, it\u2019s no longer a job. Invest time in educating patients to correct misleading promises made by health insurance companies and politicians,\u201d commented another user, J\u00f6rg Christian Nast, a specialist in gynecology and obstetrics.<\/p>\n<p>The presence of digitally informed patients is increasingly seen not only as a challenge but also as an opportunity. 
Conversations with these patients can be constructive, but they can also generate unrealistic demands or heated debates.<\/p>\n<p>Thus, a professional, calm, and explanatory approach remains crucial, and at times, a dose of humor can help. Another user who specializes in internal medicine added, \u201cThe term \u2018online consultation\u2019 takes on a whole new meaning.\u201d<\/p>\n<p>The full forum discussion, \u201cThe Most Frequently Asked \u2018Dr. Google\u2019 Questions,\u201d can be found <a href=\"https:\/\/www.coliquio.de\/austausch\/684a9f6274398ab246d34089\" rel=\"nofollow noopener\" target=\"_blank\">here<\/a>.<\/p>\n<p>Find out what young physicians think about AI and the evolving doctor-patient relationship in our interview with Christian Becker, MD, MHBA, University Medical Center G\u00f6ttingen, G\u00f6ttingen, Germany, and a spokesperson for the Young German Society for Internal Medicine.<\/p>\n<p>Read the full interview <a href=\"https:\/\/www.coliquio.de\/content\/dgim\/ki-der-medizin-aus-sicht-der-jungen-aerztegeneration-51380\" rel=\"nofollow noopener\" target=\"_blank\">here<\/a>.<\/p>\n<p>This story was translated from <a href=\"https:\/\/www.coliquio.de\/content\/leben-als-aerztin-oder-arzt\/wenn-patienten-alle-antworten-parat-haben-52624\" rel=\"nofollow noopener\" target=\"_blank\">Coliquio<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"Patients arriving at appointments with researched information is not new, but artificial intelligence (AI) tools such as 
ChatGPT&hellip;\n","protected":false},"author":2,"featured_media":356014,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4316],"tags":[8201,323,1942,55128,18187,47892,84240,3725,105,20909,4348,301,3082,3690,55127,55126,105093,8200,112421,5332,16,15,125501],"class_list":{"0":"post-356013","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-adverse-effects","9":"tag-ai","10":"tag-artificial-intelligence","11":"tag-artificial-neural-networks","12":"tag-blood","13":"tag-board-certification","14":"tag-cardiovascular-imaging-cardiac-imaging-cv-imaging","15":"tag-deep-learning","16":"tag-health","17":"tag-health-insurance","18":"tag-healthcare","19":"tag-heart","20":"tag-internet","21":"tag-machine-learning","22":"tag-ml-natural-language-processing","23":"tag-npl","24":"tag-otolaryngology-ent-specialty-ent-speciality","25":"tag-patient-safety","26":"tag-revenue-and-practice-management-practice-management-revenue","27":"tag-side-effects","28":"tag-uk","29":"tag-united-kingdom","30":"tag-vertigo"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/115053798597032476","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/356013","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=356013"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/356013\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/356014"}],"wp:attachmen
t":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=356013"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=356013"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=356013"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}