As soon as I spotted my patient in the waiting room, I knew that I’d be admitting him to the hospital. His breathing wasn’t necessarily heavy, but there was something in the contour of his respirations that caught my eye. Or maybe it was the expression on his face — eyes more quizzical than crinkling, lips more decisively hewn than usual.

Every doctor and nurse has had this experience — a glance at a patient and the instant recognition that something has run amok in his or her physiology. This is especially common in primary care medicine, where we know our patients for years, sometimes decades. We know their gait, their heart murmurs, their blood counts. And we know when something is amiss.

Like most doctors these days, I’ve been incorporating medical artificial intelligence tools into my practice. It’s become so easy to type in a quick description of an 86-year-old male with heart failure, diabetes and gout, toss in some test results, and see what the bot spits out. I appreciate that A.I. can expeditiously outline next steps for the clinical evaluation, suggest rarer diagnoses or draft a feisty appeal letter for an insurance denial. But the problem is that A.I. is evaluating only some statistical average of 86-year-old males with heart failure, diabetes and gout. It is not assessing that one specific 86-year-old man with these conditions whom I am looking at across the waiting room.

There’s an ocean of distance between the “patient” that A.I. is analyzing and the patient that the human doctor or nurse is assessing. Navigating the gap is something writers also grapple with. When making a diagnosis, as it were, of good writing to publish in the literary journal I edit, I look for characters who are fully realized, with physicality that is palpable and an emotional complexity both visceral and vivid. These details aren’t always made explicit but are pieced together from hints and subtle cues. What I’ve realized over the years is that this is not so different from what a doctor has to do when assessing her patient’s health.

This is the inherent limitation of A.I. in medicine. It’s simply impossible — at least for now — for these tools to truly see the multidimensional patient. A.I. can’t know how the agony of a child estranged by substance use affects the blood pressure. It can’t factor in the economic and social crosscurrents that bear on medication adherence. It can’t account for the simmering grief of a lost spouse that influences a patient’s health decisions far more than any clinical guideline.

So while A.I. is a useful tool, particularly for pattern recognition and data organization, the “patients” it manages feel like stock characters who share check-box traits with actual patients in the same way a Harlequin romance heroine shares the same number of limbs as Anna Karenina. A.I. might be quick to spit out a treatment plan, and it might even be correct, but a clinician must decide how to make that treatment work for the specific person sitting in front of her, or whether to even treat at all. Such decisions do not readily follow an algorithm.

It’s easy to be book smart — our A.I. tools and databases may already have a monopoly on information processing — but it’s far harder to be wise and know how to apply the trove of knowledge at our disposal. As we train the next generation of doctors and nurses, we can worry a bit less about the smarts (they’re all smart enough, and they can look up whatever they need to know), but we have to help them cultivate the wisdom of what to do with all those facts, and how to guide individual patients through their very particular maelstroms of illness.

Understanding the complexity of character is why the medical humanities are arguably as important as A.I. skills for these rising clinicians. A.I. is optimized to give us a seemingly certain answer, but clinical medicine is anything but certain. Grappling with ambiguity, uncertainty and contradiction is exactly where the humanities excel, and exactly why A.I. can’t supplant the totality of medical care.

I admitted my patient to the hospital for his heart failure, but it turned out that his kidney function had significantly worsened as well; both problems were likely precipitated by a wrenching family crisis that upended his eating habits. A.I. might have been able to offer a template for treating his heart failure and even suggest that dialysis was indicated, but it would have faltered in advising whether dialysis would improve or worsen the quality of life for this particular 86-year-old. And it would have been useless for navigating the family crisis that mattered far more in my patient’s life than either of his flagging kidneys.

A.I. can be a useful prop in the patient’s story, but the character study remains an indispensable part of accurate diagnosis and effective treatment. The moment a patient’s illness unfolds before us, after all, we doctors and nurses become characters in that story. For my patient, our joint story has gone on for almost 25 years — a miracle, frankly, in a rapacious health care system that is heroically unconducive to primary care. I’m glad to have A.I. as a tool, but I’m even more grateful to hear my patient’s faithful heartbeat, every single time I lay my stethoscope on his chest.


Danielle Ofri, a primary care doctor at Bellevue Hospital, is the author of “When We Do Harm: A Doctor Confronts Medical Error” and the editor in chief of Bellevue Literary Review.

