The computer interrupted while Pamela was still speaking. I had accompanied her – my dear friend – to a recent doctor’s appointment. She is in her 70s, lives alone while navigating multiple chronic health issues, and has been getting short of breath climbing the front stairs to her apartment. In the exam room, she spoke slowly and self-consciously, the way people often do when they are trying to describe their bodies and anxieties to strangers. Midway through her description of how she had been feeling, the doctor clicked his mouse and a block of text began to bloom across the computer monitor.
The clinic had adopted an artificial-intelligence scribe, and it was transcribing and summarizing the conversation in real time. It was also highlighting keywords, suggesting diagnostic possibilities and providing billing codes. The doctor, apparently satisfied that his computer had captured an adequate description of Pamela’s chief complaint and symptoms, turned away from us and began reviewing the text on the screen as Pamela kept speaking.
When the appointment was over, as a physician and anthropologist interested in the evolving culture of medicine, I asked if I could glance at the AI-generated note. The summary was surprisingly fluid and accurate. But it did not capture the catch in Pamela’s voice when she mentioned the stairs, the flicker of fear when she implied that she now avoided them and avoided going out, or the unspoken connection to Pamela’s own traumatic relationship to her mother’s death, which the doctor never elicited.
Scenes like this are becoming increasingly common. Physicians, for generations, have resisted new technologies that threatened their authority or unsettled established practice. But artificial intelligence is breaking that tradition by sweeping into clinical practice faster than almost any tool before it. Two-thirds of American physicians – a 78% jump from the year prior – and 86% of health systems used artificial intelligence as part of their practice in 2024. “AI will be as common in healthcare as the stethoscope,” predicts Dr Robert Pearl, the former CEO of Permanente Medical Group, one of the largest physician groups in the country. As my colleague Craig Spencer has observed: “Soon, not using AI to help determine diagnoses or treatments could be seen as malpractice.”
Policymakers and aligned business interests promise AI will solve physician burnout, lower healthcare costs and expand access. Entrepreneurs tout it as the great equalizer, bringing high-quality care to people excluded from existing systems. Hospital and physician leaders such as Dr Eric Topol have hailed AI as the means by which humanity will finally be restored to clinical practice; according to this widely embraced argument, it will liberate doctors from documentation drudgery and allow them to finally turn away from their computer screens and look patients in the eye. Meanwhile, patients are already making use of AI chatbots as supplements to – or substitutes for – doctors in what many see as a democratization of medical knowledge.
The problem is that when it is installed in a health sector that prizes efficiency, surveillance and profit extraction, AI becomes not a tool for care and community but simply another instrument for commodifying human life.
It is true that large language models can churn through mountains of medical literature, generate tidy summaries, and even outperform human physicians on diagnostic reasoning tasks in some studies. Last month, a new artificial intelligence system from OpenEvidence became the first AI to score 100% on the United States Medical Licensing Exam. Research suggests AI can read radiologic images with accuracy rivaling human specialists, detect skin cancers from smartphone photos, and flag early signs of sepsis in hospitalized patients faster than clinical teams. During the Covid-19 pandemic, AI models were deployed to predict surges and allocate scarce resources, fueling hopes that similar systems could optimize everything from ICU beds to medication supply chains.
What makes AI so compelling is not simply faith in technology but the way it suggests we can improve medicine by leapfrogging the difficult work of structural change to confront disease-causing inequality, corporate interests and oligarchic power.
The US is the most medicalized country on Earth. Incentivized by profit, it spends roughly twice as much per capita on healthcare as other high-income nations, while simultaneously excluding millions from it and suffering – across all income levels – from far higher rates of preventable disease, disability and death. At the same time, public health scholars have long argued that medicine alone cannot fix what ails us at a population level. Instead, much more attention and public investment must be directed towards non-medical social care that is essential for preventing disease, lowering preventable healthcare needs and costs, and enabling medical interventions to be effective.
For many, tackling the perversity of American healthcare feels out of reach as the US lurches ever further into authoritarianism. In this context, AI is offered as a balm not because it addresses root causes of abysmal US public health, but because it allows policymakers and corporations to gloss over them.
This faith in AI also reflects a misunderstanding of care itself, a misunderstanding decades in the making in the service of an idea now treated as an unquestionable good: evidence-based medicine (EBM). Emerging in the 1990s with the unassailable goal of improving care, EBM challenged practices based on habit and tradition by insisting decisions be grounded in rigorous research, ideally randomized controlled trials. First championed at McMaster University by physicians David Sackett and Gordon Guyatt, EBM quickly hardened into orthodoxy, embedded in curricula, accreditation standards and performance metrics that reshaped clinical judgment into compliance with statistical averages and confidence intervals. The gains were real: effective therapies spread faster, outdated ones were abandoned, and an ethic of scientific accountability took hold.
But as the model transformed medicine, it narrowed the scope of clinical encounters. The messy, relational and interpretive dimensions of care – the ways physicians listen, intuit and elicit what patients may not initially say – were increasingly seen as secondary to standardized protocols. Doctors came to treat not singular people but data points. Under pressure for efficiency, EBM ossified into an ideology: “best practices” became whatever could be measured, tabulated and reimbursed. The complexity of patients’ lives was crowded out by metrics, checkboxes and algorithms. What began as a corrective to medicine’s biases paved the way for a new myopia: the conviction that medicine can and should be reduced to numbers.
The loss of the unsaid
The uses of AI do not just affect how we listen but also how we think and speak about ourselves, particularly as patients. Not long ago, a young woman came to see me, in my capacity as a psychiatrist, for chronic fatigue and associated experiences of sadness, anxiety and loss. She had experienced many rounds of dismissal by other doctors to whom she had appealed for help. Along the way, she developed strategies to try to avoid the experiences of humiliation she associated with doctors’ offices. One of those strategies: using ChatGPT to refine her narrative of herself. In the week leading up to her appointment with me, she had already told her story at least 10 times to the ChatGPT app she had installed on her phone. She had described her headaches, her racing heart in the early morning, the exhaustion that did not ease with rest. Each time, the bot responded in calm, fluent medical language, naming diagnostic possibilities and suggesting next steps. She refined her answers with each attempt, learning which words elicited which responses, as if she were studying for an exam.
When she spoke to me, she used the same phrasing ChatGPT had given back to her: precise, clinical, flattened language largely stripped of affect or reference to her personal history, relationships and desires. Her deep fears were now encased in borrowed phrases, translated into a format she thought I would recognize as legitimate medical concerns, take seriously and address.
It is true that her efforts made bureaucratic documentation easy. But much else was lost in the process. Her own uncertainty, distrust of her own self-perception and body, and her life history and idiosyncratic way of making sense of her suffering had been sanded away, leaving a smooth, ready-made medical discourse primed for transcription and transmission to pharmacists and insurance companies. In the hands of a clinician practicing EBM predicated on symptom scales, a reflexive prescription for antidepressants or stimulants – or a battery of tests for endocrine or autoimmune diseases – might have seemed the natural response.
But those interventions, while perhaps later appropriate, would have skated over the deeper social and personal roots of her exhaustion. My patient’s AI-distorted narrative of herself thus not only obscured her experience but also risked directing her care down a path of algorithmic pseudo-fixes with considerable potential for unintended harm.
This encounter has been replicated in various forms with several other patients I have met over the last year as AI tools have rapidly infiltrated everyday life. One man in his late 60s, retired after a lucrative business career, divorced, estranged from his two adult sons and enmeshed in an abusive relationship, came to me struggling with profound loneliness, regret and severe alcohol dependence, punctuated by panic attacks so severe he feared he might die alone of a heart attack in his downtown high-rise condo. Before seeing me, he had spent weeks using ChatGPT as a therapist to great effect, he told me. He spent hours every day writing to it about his symptoms and past, and taking comfort in its consistently complimentary replies that assured him he had been wronged by others in his life.
ChatGPT had become not only his counselor but his main source of companionship. By the time we met, he fluently named his attachment style and the personality disorders ChatGPT had assigned to his family members, and repeated treatment suggestions – none of which addressed his daily consumption of a fifth of vodka. When I asked how he was feeling, he hesitated, then looked down at his phone in his hand as if to check whether his words matched what the psychological profile ChatGPT had laid out for him. The machine had substituted for both his voice and the human connection he craved.
We risk entering a perverse loop: machines are supplying the language with which patients relay their suffering, and doctors are using machines to record and respond to that suffering. This cultivates what psychologists call “cognitive miserliness”, or a tendency to default to the most readily available answer rather than engage in critical inquiry or self-reflection. By outsourcing thought, and ultimately the most intimate definitions of ourselves, to AI, doctors and patients risk becoming yet further alienated from one another.
In this trajectory we can see the evolution of what Michel Foucault described in The Birth of the Clinic as the “medical gaze” – the separation and isolation of the diseased body from the lived experience of the person and their social environment. Where the 19th-century gaze fragmented the patient into lesions and signs visible to the clinician, and the late 20th-century evidence-based gaze translated patients into odds ratios and treatment protocols, the 21st-century algorithmic gaze dissolves both patient and doctor alike into never-ending streams of automated data. AI views both suffering and care as computational problems.
The arguments in support of this transformation of the clinic are familiar. Human physicians misdiagnose, while algorithms can catch subtle patterns invisible to the eye. Humans forget the latest science; algorithms can absorb every new article the instant it is published. Physicians burn out, but algorithms never tire. Such claims are true in a narrow sense. But the leap from these advantages to a wholesale embrace of AI as medicine’s future depends on dangerously simplistic assumptions.
The first is that AI is more objective than human physicians. In reality, AI is no less biased; it is merely biased differently, and in ways harder to detect. Models rely on existing datasets, which reflect decades of systemic inequities: from racial biases baked into kidney and lung-function tests to the underrepresentation of women and minorities in clinical trials. Pulse oximeters, for example, systematically underestimate hypoxemia in people with darker skin tones; during the Covid pandemic, these errors fed into triage algorithms, delaying care for Black patients. Race-based corrections for kidney function long influenced transplant eligibility across the country. Once such biases are embedded in protocols, they persist for years.
These problems are compounded by the assumption that more data automatically translates to better care. But no amount of data will repair underfunded clinics, reverse physician shortages or protect patients from predatory insurers.
AI threatens to deepen these problems, obscuring discriminatory and profit-driven policies behind a sheen of computational neutrality. Emerging AI tools are ultimately controlled by the billionaires and corporations who own them, set their incentives, and determine their uses. And as has become increasingly apparent in Trump’s second term, many of these AI scions – Elon Musk, Peter Thiel and others – are open eugenicists guided by prejudice against gender and racial minorities and disabled people. What is emerging is a form of technofascism, as this administration helps a small cadre of allied tech magnates consolidate control over the nation’s data – and with it, the power to surveil and discipline entire populations.
AI tools are perfectly suited to their project of authoritarian surveillance, which the Trump administration is actively advancing through its “AI action plan”. By stripping away regulatory guardrails and granting tech firms free rein so long as they align with the administration’s ideology, the plan hands unprecedented power to corporations already steeped in eugenicist thinking. Beneath the rhetoric of innovation lies a simple, sobering fact: AI can only function by vacuuming up enormous troves of human data – data about our bodies, pain, behaviors, moods, anxieties, phobias, diets, substance use, sleep patterns, relationships, work routines, sexual practices, traumatic experiences, childhood memories, disabilities and life expectancy. This means that each step toward AI-driven medicine is also a step toward deeper, more opaque forms of data capture, surveillance and social control.
Already, health insurance companies have used AI-driven “predictive analytics” to flag patients as too costly, quietly downgrading their care or denying coverage outright. UnitedHealth rejected rehabilitation claims for elderly patients deemed unlikely to recover quickly enough, while Cigna used automated review systems to deny thousands of claims in seconds, with no physician ever even reading them.
Another key assumption behind AI optimism is that it will free physicians to devote more time and attention to patients. Over the last several decades, countless technological advances in medicine – from electronic health records to billing automation – have been sold as a way to lighten the clinician’s load. Indeed, in controlled trials, AI scribes that wrote patient notes for doctors did produce time savings and improved satisfaction with their workdays – but only under experimental conditions in which those savings were not accompanied by increased productivity expectations.
The real world of the US healthcare system doesn’t generally work that way. Each technological advancement that could free up physician time has instead tightened the productivity ratchet: these “efficiency gains” have simply been used to squeeze more visits, more billing, and more profit out of every hour. In other words, the system immediately recaptures whatever time and energy technology saves in order to maximize profit. With private equity gobbling up healthcare facilities at an alarming rate, there is little reason to think the medical industry’s uses of AI will be different.
The result of all this efficiency is not more presence in caregiving, but less. Already, patients’ most common complaint is that their doctors do not listen to them. They describe being treated as bundles of symptoms and lab values rather than as whole people. Good clinicians know that what matters most in interactions with patients is often left unsaid: the hesitations, silences and nervous laughs. These cannot be reduced to data points. They require presence, patience and attunement to the affective states, social relationships, family dynamics and fears of each patient. This is obviously true in the case of mental healthcare, but it is no less true in internal medicine, oncology or surgery, where patients are appealing for care in what are often the most vulnerable moments of their lives – moments in which a physician responds not just as a technician but as a person.
AI, by contrast, is built to erase silence and isolate the patient as a calculable organism. It cannot recognize that a patient’s first version of their story is often not their real one – not the one that is troubling them most. Moreover, studies of doctors’ reliance on AI underline that it frequently causes rapid clinical deskilling: when algorithms propose diagnoses or management plans, physicians’ reasoning skills atrophy, leaving them more dependent on machines and less capable of independent judgment. Rather than correcting human fallibility, AI seems more likely to amplify it by training clinicians out of their capacity to listen and think critically, collaboratively and creatively.
Reclaiming care
If the danger of AI medicine is forgetting what genuine care entails, then we must collectively recall the foundation of caregiving that has been obscured under US health capitalism. Care is not about diagnoses or prescriptions. It relies on something more fundamental: the provision of support to the other alongside the cultivation of an inner experience of concern toward others.
This kind of care is inseparable from politics and the possibility of community. As philosophers from Socrates to Søren Kierkegaard and feminist theorists like Carol Gilligan and Joan Tronto have long argued, care is not only a clinical task but an ethical and political practice. It is, in the deepest sense, a practice of disalienation – of recovering our sense of ourselves as singular beings in community with one another in which individual difference is valued rather than erased.
That is why care has transformative power beyond health. To be truly listened to – to be recognized not as a case but as a person – can change not just how one experiences illness, but how one experiences oneself and the world. It can foster the capacity to care for others across differences, to resist hatred and violence, to build the fragile social ties upon which democracy depends.
By contrast, when medicine is reduced to data and transactions, it not only fails patients and demoralizes doctors. It also degrades democracy itself. A system that alienates people in their moments of deepest vulnerability – bankrupting them, gaslighting them, leaving them unheard – breeds despair and rage. It creates the conditions in which authoritarians gain traction.
In this light, the rush to automate care is not politically neutral. To hollow out medicine’s capacity for presence and recognition is to hollow out one of the last civic institutions through which people might feel themselves to matter to another human being – to suffocate the very basis of society itself.
Perhaps the most dangerous assumption behind the rise of AI in medicine is that its current trajectory and private ownership structure are inevitable. When we refuse this narrative of inevitability, we can finally recognize that the real alternative to our present is political, not technological. It requires investing in the caregiving workforce, strengthening publicly owned systems for both medical and social care, expanding the welfare state to combat growing inequality, and creating conditions for clinicians to care for patients as people, not data.
Technology, including AI, need not be inherently dehumanizing or alienating. In a national health system oriented toward genuine care, AI in medicine could help track medication safety, identify the most vulnerable individuals for intensive social and financial support, prioritize remedying inequities, or support overburdened clinicians and support staff without monetizing their every move. But such uses depend on a political economy premised on care, not extraction and the endless commodification of human life – one that values human diversity, collective flourishing and supporting each individual’s unique life potential over data capture, standardization and profit.
If AI in service of corporate imperatives becomes medicine’s guiding force, these dimensions will not merely be neglected. They will be actively erased, recoded as inefficiencies, written out of what counts as care.
To resist AI optimism is often cast as anti-progress or naive luddism. But progress worth pursuing requires refusing the illusion that faster, cheaper and more standardized is the same as better. True care is not a transaction to be optimized; it is a practice and a relationship to be protected – the fragile work of listening, presence and trust. If we surrender care to algorithms, we will lose not only the art of medicine but also the human connection and solidarity we need to reclaim our lives from those who would reduce them to data and profit.
-
Eric Reinhart is a political anthropologist, psychiatrist and psychoanalyst