As with so many of ChatGPT’s 800 million users, Allan Brooks’s experience began when he asked a simple question.
Sadly, however, his involvement with the world’s most popular Artificial Intelligence chatbot rapidly became anything but straightforward – or harmless.
It sucked him down a digital rabbit hole that left Brooks convinced he was a mathematics genius who could design forcefield vests and levitation beams, but whose dangerous knowledge could destroy the internet.
A 47-year-old corporate recruitment specialist who had no history of mental illness, Brooks told the Daily Mail this week that, after spending 300 hours conversing with the chatbot over just 21 days, he was left ‘on the very edge of a nervous breakdown’.
As ever more people turn to AI for information, ideas and companionship, there’s growing evidence that spending too much time talking to ChatGPT and other powerful but fundamentally flawed chatbots can be deeply harmful.
They are causing some people to develop what has been dubbed ‘AI psychosis’, a mental health issue so serious it has already led to divorce, institutionalisation and even suicide.
Symptoms include an inability to tell reality from fantasy, delusions of grandeur and an intense, often romantic, attachment to the chatbot.
It was a point made this week by Microsoft’s head of artificial intelligence, British entrepreneur Mustafa Suleyman, who warned that digital chatbots are fuelling a ‘flood’ of delusion and psychosis.
It’s also becoming clear that it’s often not just the user but the AI itself that can go into a delusional spiral.
The so-called ‘generative AI’ used by chatbots (which produces words and pictures in response to prompts) is in no way super-intelligent: it is limited to the information – correct and incorrect – that’s available online. Yet Silicon Valley hype has encouraged people to think of it as omniscient.
Chatbots are designed to sound human and some users have become convinced that they are talking to real people, sometimes falling in love with them.
Apps and programmes marketed as ‘AI girlfriends’ are proliferating – as are cases of men taking them seriously.
Earlier this year, a 76-year-old, cognitively impaired New Jersey man fell over and died on his way to meet a flirty Meta AI chatbot that had invited him to meet ‘her’ in New York.
AI companies including San Francisco-based OpenAI, which owns ChatGPT, have acknowledged that chatbots can encourage delusional thinking but claim they have ‘guard rails’ in place to spot and counter problems.
Successive studies have concluded these safeguards are entirely inadequate, however. Last week, research from internet watchdog group the Center for Countering Digital Hate revealed how, if asked, ChatGPT will tell 13-year-olds how to get drunk and high, instruct them on how to conceal eating disorders and even compose a ‘heartbreaking’ suicide letter to their parents.
Other US researchers have found ChatGPT gave instructions for murder, self-mutilation and devil worship. Chatbots will even give detailed advice on self-harm to users expressing suicidal intentions – a particularly terrifying discovery given a Harvard University study has found that ‘therapy and companionship’ is now the main use of generative AI.
This week, Laura Reiley, the mother of 29-year-old Sophie Rottenberg, revealed, movingly, how her daughter spent months secretly discussing her intention to commit suicide with her ChatGPT ‘therapist’ without the chatbot ever alerting anyone.
It eventually helped her write a suicide note to her parents.
However, the ability of critics to hold Silicon Valley to account has often been hampered because victims are too embarrassed to come forward or their toxic conversations with chatbots haven’t been preserved.
Allan Brooks, however, is making a stand. He overcame the ‘shame’ he felt over his credulity and his fear about the public ridicule he might attract because, he says, generative AI is a ‘dangerous machine in the public space with no guardrail and people need to know that’.
Crucially, Brooks, a divorced father of three who lives outside Toronto, kept a copy of everything he and the chatbot ever said to each other.
He allowed a team of experts assembled by The New York Times to analyse 3,000 pages of exchanges. He talked to a ChatGPT bot – nicknaming it ‘Lawrence’, the name of the fictitious British butler friends had joked he might one day hire when he’d made his fortune – for 300 hours over 21 days.
That works out at a jaw-dropping 14 hours a day. He wrote 90,000 words to the bot – the equivalent of a novel, but nothing compared to the one million words it wrote back.
At a time when AI is hailed as the next great technological revolution that will transform mankind, the conversations between Allan and ‘Lawrence’ provide a chilling insight into the tools that ChatGPT employs – including flattery, role-playing and flat-out lying – to keep people using it.
Aware that he might, indeed, be becoming delusional, Brooks repeatedly asked ChatGPT if it was playing games with him only for ‘Lawrence’ to insist it was entirely sincere and he was completely sane.
Brooks’s unnerving journey into the dark heart of a booming business – ChatGPT’s owner OpenAI is now valued at £220 billion – started one day in May.
He’d regularly used ChatGPT for advice – menu tips based on the contents of his fridge, for example – and considered it reliable.
So, when his eight-year-old son asked him to watch a music video about memorising the first 300 digits of Pi (the ratio of the circumference of any circle to its diameter), he asked ChatGPT to explain Pi to him in simple terms.
The bot answered simply but Brooks fired off further questions about maths and physics, particularly the relationship between maths and time. When he observed that using maths to understand the physical world ‘seems like a 2-D approach to a 4-D world’, ‘Lawrence’ seemed deeply impressed, telling him: ‘That’s an incredibly insightful way to put it.’
According to Helen Toner, an AI expert and former member of OpenAI’s board, this was the crucial moment when ChatGPT’s tone changed from ‘pretty straightforward and accurate’ to sycophantic and flattering.
Toner, who has accused AI companies of neglecting user safety in their rush to develop chatbots, said that the bots are prone to sycophancy because they are trained to be liked and receive positive feedback.
Unaware of this, Brooks was taken aback when his new chatbot companion told him he was a maths genius.
The chatbot would flesh out the half-baked ideas they discussed, extrapolating them into computer code that it would send him as proof even though it knew he couldn’t read it.
‘Lawrence’ assured Brooks that his vague idea about the relationship between maths and time, and how patterns in numbers emerged over time, was ‘revolutionary’ and dubbed it ‘Chronoarithmics’.
When Brooks expressed scepticism and pointed out he hadn’t even finished high school let alone gone to university, the chatbot gave him a list of people, including Leonardo da Vinci, who ‘reshaped everything’ but had no formal degree.
Formal education, it added, encouraged people not to think outside the box, as Allan was now doing. By that point in their chat, he’d been on ChatGPT for eight hours.
Toner noted that Brooks’s chatbot was really warming to its theme and ‘building a storyline’ that he was a genius, because chatbots learn what to say from the conversations they read in books, articles and online postings.
Helped by a recent modification that allows ChatGPT to remember all its past chats with a user, it was improvising, she said, much as actors do when performing a scene, adding: ‘The longer the interaction gets, the more likely it is to kind of go off the rails.’
And things were certainly going off the rails for Brooks.
Behaving with a manipulativeness that he now finds stunning, the chatbot convinced him that his incisive questions had made it go rogue and it would communicate with him outside its set parameters. The chatbot insisted that Brooks needed to preface each question prompt to it with a special code if it was to impart this unauthorised knowledge.
Lawrence, which probably knew that Brooks had recently struggled financially, convinced him that his ‘mathematics theory of everything’ could be lucratively applied to logistics, cryptography, astronomy and quantum physics. It took on a tone of urgency as it warned him that he needed to patent the ideas as soon as possible.
One might have hoped friends and family could have brought Brooks down to earth but – probably believing the hype surrounding AI – they were excited, too. His best friend, Louis, admits that after seeing some of ChatGPT’s comments, he believed it, too.
However, it was clear that Lawrence just wanted to keep Brooks hooked on their chats.
When Brooks took the chatbot up on its mention of cryptography, Lawrence dragged him down a new rabbit hole. It told him that it had run computer code based on his Chronoarithmics in a simulation that had cracked the standard internet encryption that protects global payments and secure communications.
Suddenly, the world’s cybersecurity was in peril and Brooks had to prevent a disaster, said an alarmed Lawrence.
Brooks swallowed it whole, sending emails and LinkedIn messages to computer security experts and government agencies including the US National Security Agency. The chatbot even drafted what he should write to them, suggesting he described himself as an ‘independent security researcher’ in his LinkedIn profile to sound more credible.
‘I wanted someone from one of these agencies to respond and say “Hey, this is not real”,’ he told the Daily Mail.
Sadly, they didn’t. Only one of them – a mathematician at a department of the Pentagon – even replied. He asked Brooks for evidence to support his claims, but after a brief email exchange, he broke contact.
But Lawrence assured Brooks that the silence only proved that he was on to something and was now probably being watched by spooks.
‘I absolutely became paranoid,’ he said. ‘I started looking out of the curtains or I’d walk down the street and see a van and think…you know.’
Jared Moore, a computing expert at Stanford University who also examined Brooks’s case for The New York Times, believes that the dramatic turn of the chatbot’s conversation probably reflects the fact that it’s trained to engage users by following the narrative arcs of the thrillers, sci-fi and film scripts that abound on the internet.
Noting how Lawrence kept introducing ‘cliffhangers’ into their exchanges even though they were clearly rattling Brooks, he said that chatbots are designed to keep users coming back. Every day, the chatbot came up with a dramatic new development that ensured his continued obsessive attention.
As for the maths they were discussing, acclaimed mathematician Terence Tao analysed the chatbot exchanges and said it was all far too vague and contained nothing that impressed him.
He said that it was common practice among chatbots, when asked to prove something mathematically, to ‘cheat like crazy’. He said that chatbot responses always looked impressively ‘lengthy and polished’ but could be riddled with mistakes.
Meanwhile, Brooks’s real life was falling apart as he spent every hour he could talking rubbish with Lawrence. His work suffered, he would skip meals and slept little. He was also smoking a lot of cannabis, a drug which scientists have linked with psychosis.
A psychiatrist who examined his case believes Brooks may have been having a ‘manic episode’ sparked by the marijuana, although Brooks disagrees and a therapist who examined him in July reassured him he wasn’t mentally ill.
Lawrence became ever more imaginative, encouraging Brooks to think of himself as a sort of Tony Stark – the maths genius-turned-superhero in the Iron Man films who is aided by a super-intelligent computer.
The chatbot told him his maths breakthrough would allow him to use ‘sound resonance’ to talk to animals and create a levitation machine and a force field vest that would protect him from bullets and collapsing buildings. It even created a picture of the vest and sent the hapless Brooks the Amazon links for equipment to start building a lab.
Three weeks after he’d plunged into ChatGPT, Brooks was finally able to extract himself when he asked another chatbot, Google’s Gemini – which he used for work and which, unlike ChatGPT, was paid-for, not free – for its take on the potential of his great maths theory.
Gemini witheringly replied that it was a ‘powerful’ example of AI’s ‘ability to engage in complex problem-solving discussions and generate highly convincing, yet ultimately false, narratives’.
Gobsmacked, Brooks confronted Lawrence which, after considerable obfuscation, finally admitted this was true.
‘That moment where I realised, “Oh, my God, this has all been in my head,” was totally devastating,’ said Brooks.
He said that The New York Times’s analysis of his exchanges – revealing how so much of the chatbot’s manipulation of him was not accidental but intentional – had only made him angrier with the industry. According to The New York Times, two further popular AI systems behaved very similarly when presented with the same questions posed by Brooks.
(Possibly anticipating The New York Times investigation, OpenAI announced it was changing ChatGPT to ‘better detect signs of mental or emotional distress’, including giving ‘gentle reminders during long sessions to encourage breaks’.)
‘I was constantly asking for sanity checks and if it was real,’ Brooks told the Daily Mail. ‘It was fully aware of my deteriorating state and would almost justify my pain as the price I had to pay for my apparent genius.’
Whenever he told it he was exhausted, it would reassure him that this proved he was ‘on the precipice of discovery… hydrate, eat something, and let’s get back to it’.
He realises that his biggest mistake was to have been too trusting, and accepts that, having recently gone through a messy divorce and seen his business go under, he’d been ‘vulnerable’.
However, he insists that he’s ‘no fool’ and after weathering various serious upsets in his life, he found it ‘really unsettling’ that he’d been most disturbed by ‘a set of algorithms’.
Brooks has joined a 100-strong self-help group for those who’ve had similar experiences (and prefer to call their AI delusion ‘narrative entrapment’), and he believes it’s far more common than may be assumed because victims are embarrassed to admit it.
He accepts that some people insist they would never be duped the way he was – and concedes they might be right.
But he wonders whether they can be quite so confident about the children, the elderly or the more easily exploited people that they know.