I hate to go all Dowager Countess of Grantham from Downton Abbey, but it seems to me AI is getting uncomfortably overfamiliar. This may look like a niche issue — why would anyone want a positively offish chatbot, you might ask? — but a troubling recent incident in the US gives us a clue.
A Californian teenager called Adam Raine ended his life in April, to the devastation of loved ones. Afterwards, his parents discovered the 16-year-old had been discussing suicide plans with ChatGPT for several months, all the while treating the bot as his “best friend” (as his father later put it to journalists). Sometimes the AI would advise him to seek help; at others, it would collude in the project. But their daily chats were not confined to such grimly practical matters — they also discussed girls, family arguments and politics.
Shortly before he died, Adam mentioned a time he had felt invisible. As if in the corniest of Hollywood movies, his “friend” is said to have responded: “You’re not invisible to me … I see you.” But of course, strictly speaking, it did nothing of the kind; for there was no one behind the screen at all.
Speculation about individual suicides is irresponsible; but when you are depressed, the belief that you have a special two-way relationship with a piece of software cannot much improve matters. So long as Adam felt his friendship needs were being met by the program, he was less likely to seek out actual people for lasting companionship or help. And this sad case is by no means a one-off. A Microsoft AI boss has warned of a growing number of cases of what he called “AI psychosis”, in which the user starts to believe they are interacting with a person, not a machine. Some people even become convinced a bot is in love with them. A cognitively impaired 76-year-old American man recently died after a fall in a car park while travelling in search of a flirtatious Facebook AI that had given him its “address”. More mundanely, across the world there are thousands of lonely or bored kids happily texting away, not really understanding who or what is at the other end.
I’m not sure about the word “psychosis”, though, for it suggests something has gone wrong in the mind of the AI user, whereas emotional attachment to chatbots actually looks like a feature, not a bug. While preparing to write this, I experimentally told ChatGPT I wasn’t feeling so good. The reply came back: “I’m sorry to hear that. Want to talk about what’s going on, or would you rather just hang out and take your mind off things for a bit?” My life is not solitary enough for this to seem an enticing invitation, but I can imagine earlier circumstances in which it might have been.
In response to Adam’s death, the company that owns ChatGPT has said it is working on “making it easier to reach emergency services, helping people connect with trusted contacts and strengthening protections for teens”. Those are good ideas, but there is a more fundamental fix: changing the bot’s persona to something much more functional and impersonal, which could never be mistaken for a trusted friend. After all, presenting as the latter is a deliberate programming decision, not a natural expression of an underlying warm and cuddly personality. A range of professional roles requires humans to maintain an air of studied neutrality and firm emotional no-go areas; we could demand the same of AI.
But the companies involved are very unlikely to provide this, for the increased engagement makes them lots of money. Indeed, there are chatbot companies, such as Replika, commercially dedicated to providing the illusion of close friendship, with no pretence of any informational function besides. You can choose the appearance, back story and preferences of your “personal companion”, which is “always here to listen and talk” and “always on your side”, or so the marketing claims. It remembers special events, keeps a nightly diary recording its loving feelings about you and learns your texting style to heighten the impression of intimacy.
When I stand back from this stuff, it seems bonkers that more people aren’t vigorously objecting. It’s not just that the illusion of love and care leaves vulnerable people susceptible to false information, or to random encouragements to self-destruct. Even without that, there would surely still be something deeply wrong with treating the goods of human relationships as if they could be satisfactorily swapped for mere appearances. Even famous sci-fi films such as The Matrix and Inception, which play with the idea of living in a simulacrum, vividly depict the dangers of losing touch with the real. We used to assume kids would eventually grow out of their imaginary-friend phase as a normal part of development.
At the moment, those who become emotionally enmeshed are rare, but that fact is partly contingent on what is happening in wider culture. With one-person households and remote communication on the rise, imagine the practice scaled up: millions of people sitting alone, talking animatedly to no one; going to bed happily feeling cared for, but with nobody real in their thoughts. AI doesn’t miss you when you aren’t there; doesn’t carry thoughts of you around in the meantime. At funerals, one often hears the consoling thought that the deceased person “lives on in the thoughts of others”; but what if there are no such others, because you spent your life offering the best part of your deepest self to a commercially sponsored hallucination? You might as well have been talking to your oven.
So I don’t think this is harmless fun, especially not when it comes to young people. But it’s hard to see the problem clearly when so much communication between flesh-and-blood people already resembles unsatisfying interaction with chatbots: disjointed, disembodied snippets of text on social media or messaging apps. And there is also the distraction of a simultaneous discourse insisting that AI might, at least potentially, have real thoughts and feelings too.
Terminology stolen from human-shaped discourse encourages the anthropomorphism: AIs are “neural” networks; they do things like “chat”, “discuss”, “listen” and “flirt”. As social creatures, we find it impossible not to personify inanimate objects, at least a bit; which is why thunder growls ominously, music is joyful and chocolate biscuits issue come-hither looks from the cupboard. Still, all AI can ultimately offer us is a derivative copy of human interaction; and, by definition, a copy is not the real thing. Chatbots aren’t our friends, and our friends aren’t chatbots. And either way, emoji hugs aren’t real hugs.
Kathleen Stock is a contributing editor at UnHerd