Patients have long arrived at appointments with researched information, but artificial intelligence (AI) tools such as ChatGPT are changing the dynamics of these encounters.
Their confident presentation can leave physicians feeling that their expertise is challenged. Kumara Raja Sundar, MD, a family medicine physician at Kaiser Permanente Burien Medical Center in Burien, Washington, highlighted this trend in a recent article published in JAMA.
A patient visited Sundar’s clinic reporting dizziness and described her symptoms with unusual precision: “It’s not vertigo, but more like a presyncope feeling.” She then suggested that the tilt table test might be useful for diagnosis.
Occasionally, a patient's questions reveal a subtle familiarity with medical jargon, suggesting either relevant training or extensive reading on the subject.
Curious, Sundar asked if she worked in the healthcare sector. She replied that she had consulted ChatGPT, which recommended the tilt table test.
For years, patients have brought newspaper clippings, internet research, and advice from friends and relatives to consultations.
Suggestions shared in WhatsApp groups have become a regular part of clinical discussions. Sundar noted that this particular encounter was different.
The patient’s tone and level of detail conveyed competence, and the confidence with which she presented the information subtly challenged his clinical judgment and treatment plans.
Clinical Practice
It is not surprising that large language models (LLMs), such as ChatGPT, are appealing. Recent studies have confirmed their remarkable strengths in logical reasoning and interpersonal communication.
However, a direct comparison between LLMs and physicians is unfair. Clinicians often face immense pressure, including constrained consultation times, overflowing inboxes, and a healthcare system that demands productivity and efficiency.
Even skilled professionals struggle to perform optimally under adverse conditions.
In contrast, generative AI operates under no such constraints: it has unlimited time and inexhaustible patience. This imbalance makes the comparison an unrealistic benchmark; nevertheless, it is today's reality.
Patients want accurate information, but more importantly, they want to feel heard, understood, and reassured. "Unfortunately, under the weight of competing demands, that is what often slips for me, not just because of systemic constraints but also because I am merely human," Sundar wrote.
Despite the capabilities of generative AI, patients still visit doctors. Though these tools deliver confidently worded suggestions, they inevitably conclude: “Consult a healthcare professional.”
The ultimate responsibility for liability, diagnostics, prescriptions, and sick notes remains with physicians.
Patient Interaction
In practice, this means fielding requests such as a tilt table test for intermittent dizziness, a request that is not uncommon but often inappropriate.
“I find myself explaining concepts such as overdiagnosis, false positives, or other risks of unnecessary testing. At best, the patient understands the ideas, which may not resonate when one is experiencing symptoms. At worst, I sound dismissive. There is no function that tells ChatGPT that clinicians lack routine access to tilt-table testing or that echocardiogram appointments are delayed because of staff shortages. I have to carry those constraints into the examination room while still trying to preserve trust,” Sundar emphasized in his article.
Sundar also described a different kind of paternalism creeping in when he speaks with medical students, one he has caught in his own inner monologue even when he does not say it aloud: the old line, "They probably WebMD'd it and think they have cancer," has morphed into the newer, just-as-dismissive line, "They probably ChatGPT'd it and are going to tell us what to order."
Such remarks often reflect clinicians' defensiveness rather than genuine engagement and carry an implicit message: We still know best. "It is an attitude that risks eroding sacred and fragile trust between clinicians and patients. It reinforces the feeling that we are not 'in it' with our patients and are truly gatekeeping rather than partnering. Ironically, that is often why I hear patients turn to LLMs in the first place," Sundar concluded.
Patient Advocacy
One patient said plainly, “This is how I can advocate for myself better.” The word “advocate” struck Sundar, capturing the effort required to persuade someone with more authority. Although clinicians still control access to tests, referrals, and treatment plans, the term conveys a sense of preparing for a fight.
When patients feel unheard, gathering knowledge becomes a strategy to be taken seriously.
In such situations, the usual approach of explaining false-positive test results, overdiagnosis, and test characteristics is often ineffective. From the patient’s perspective, this sounds more like, “I still know more than you, no matter what tool you used, and I’m going to overwhelm you with things you don’t understand.”
Physician Role
The physician's role is evolving: the shift from physician-as-authority to physician-as-advisor is accelerating. Patients increasingly arrive with expectations shaped by non-evidence-based sources and often misaligned with clinical reality. As Sundar observed, "They arm themselves with knowledge to be heard." This creates a professional duty to respond with understanding rather than resistance.
His approach centers on emotional acknowledgment before clinical discussion: "I say, 'We'll discuss diagnostic options together. But first, I want to express my sympathy. I can hardly imagine how you feel. I want to tackle this with you and develop a plan.'" He emphasized, "This acknowledgment was the real door opener."
Global Trend
What began as a US trend observed by Sundar has now spread worldwide, with patients increasingly arriving at consultations armed with medical knowledge from tools like ChatGPT rather than just “Dr Google.”
Clinicians across health systems report that digitally informed patients now make up the majority of those they see. In a forum discussion, physicians from various disciplines shared their experiences, noting that patients who arrive already informed have become the norm. Inquiries often focus on specific laboratory values, particularly vitamin D or hormone tests. In gynecologic consultations, internet research on menstrual disorders has become a routine part of patient interactions, given the overwhelming range of answers available online.
“The answers range from ‘It’s normal; it can happen’ to ‘You won’t live long,’” shared ‘Chanice,’ a Coliquio user and gynecologist. “It’s also common to Google medication side effects, and usually, women end up experiencing pretty much every side effect, even though they didn’t have them before.”
How should doctors respond to this trend? The consensus is clear: openness, education, and transparency are essential, ideally delivered in a structured manner.
“Get the patients on board; educate them. In writing! Each and every one of them. Once it’s put down in writing, it’s no longer extra work. Invest time in educating patients to correct misleading promises made by health insurance companies and politicians,” commented another user, Jörg Christian Nast, a specialist in gynecology and obstetrics.
The presence of digitally informed patients is increasingly seen not only as a challenge but also as an opportunity. Conversations with these patients can be constructive, but they can also generate unrealistic demands or heated debates.
Thus, a professional, calm, and explanatory approach remains crucial, and at times, a dose of humor can help. Another user who specializes in internal medicine added, “The term ‘online consultation’ takes on a whole new meaning.”
The full forum discussion, “The Most Frequently Asked ‘Dr. Google’ Questions,” can be found here.
Find out what young physicians think about AI and the evolving doctor-patient relationship in our interview with Christian Becker, MD, MHBA, University Medical Center Göttingen, Göttingen, Germany, and a spokesperson for the Young German Society for Internal Medicine.
Read the full interview here.
This story was translated from Coliquio.