When Claudia Zedda decided she wanted to run a half marathon, she turned to artificial intelligence.
She asked the AI bot ChatGPT to create a plan for her 21km run.
“I didn’t have a running coach. I didn’t have running background,” says the Sardinian living in Cork.
“I was very specific. I said: ‘I’m a complete beginner. Never ran in my life. I usually do weightlifting three or four times a week, and I want to stick to it, but I have a half marathon in six months. Can you create an achievable plan for me based on my goals?’”
The detailed plan laid out by the bot, which is powered by a large language model (LLM), worked for the 26-year-old; she has since completed two half marathons, shaving 15 minutes off her personal best in Galway last October.
Zedda is one of the estimated 150-200 million daily users of ChatGPT, which handles more than a billion queries each day.
It is now the fifth most visited website in the world after Google, YouTube, Facebook and Instagram, according to web analytics firm Similarweb.
Zedda says she uses ChatGPT to help her “clarify” her ideas.
“When you’re a very busy person and you have all these goals, you put a lot of pressure on yourself so sometimes using ChatGPT helps me clarify my ideas,” she says.
“I bounce them off a tool that I know is able to summarise what I’m thinking and show me which fears are blocking me from achieving my goals.”
She sees the technology as empowering and, if used correctly, as helping individuals “achieve things that you would never have thought possible”.
Sometimes, Zedda uses specific prompts so that the chatbot uses a professional tone, asking ChatGPT to speak to her “as a cognitive psychologist” would.
Other times she asks for it simply “to be nice”.
“I find it very helpful if there’s something that upsets me. Let’s say I get a message that I don’t like and I feel a bit triggered,” she says.
“Before I text back, I explain to ChatGPT how I’m feeling and how this message is triggering me, and I paste the answer that I was going to write down and I ask: ‘Does this feel reasonable?’ It helps me balance my answer.”
The AI bot has helped her “avoid a lot of arguments”, she adds, laughing.
Although Zedda finds it useful, she says that she would not rely on it as a mental health support, seeing it more as a “life coach” than a therapist.
“I went to therapy so I know what benefits a therapist can give me, and I know how important a therapist is.”
Others use chatbots differently.
Their use as a replacement for therapists, driven by their accessibility, is of increasing concern among mental health professionals in Ireland and abroad.
In the US, Maria and Matthew Raine have taken a lawsuit against OpenAI, the owner of ChatGPT, and its chief executive Sam Altman, alleging that the bot contributed to the suicide of their 16-year-old son, Adam.
In the legal action, the family included chat logs between Adam and ChatGPT in which he detailed his suicidal thoughts. The couple argue that the chatbot validated his “most harmful and self-destructive thoughts”.
Nicola Byrne, chief executive of mental health charity Shine, says that in Ireland “it would be unrealistic to ignore the fact that people are already using these tools in their daily lives, including in relation to mental health”.
“Any responsible discussion about AI and mental health must be clear that these tools are not clinicians,” she says.
She adds that we “must not allow technology to quietly replace properly resourced, human-led services”.
“AI tools, including large language models, are not healthcare and must never be positioned or used as a substitute for professional mental healthcare, clinical assessment or human support,” she says.
She points to the difference between what she describes as “low-risk, practical uses” and “uses that are clinically inappropriate or potentially harmful” and says that “used cautiously, and with a clear understanding of the limits of these tools, [ChatGPT] sits closer to information-seeking or self-management than treatment”.
Examples of what she considers to be appropriate uses are “organising routines, setting medication reminders, journaling or asking general questions after receiving a diagnosis from a qualified professional”.
“We have serious concerns about AI being used as a form of emotional support, therapy or meaning-making around symptoms. The risks are particularly acute for people living with psychosis or severe mental illness,” she says.
These risks are acute for those living with psychosis because “these systems cannot reality-test, exercise clinical judgment or safely challenge beliefs, and they can inadvertently reinforce distress or delusional thinking,” she says.
“That is not a theoretical risk.”
Byrne says she is aware that “people often turn to AI because it is accessible and non-judgmental, especially when services are stretched”.
“That reflects gaps in care, not the suitability of the technology,” she says.
The increased use of ChatGPT comes against the backdrop of attempts by the chatbot’s owner to sell its virtues to the Government and to encourage greater use of its technology in a country where, it claimed, usage was lower than elsewhere.
In a meeting at Government Buildings last May, Sarah Friar, OpenAI’s chief financial officer, told Taoiseach Micheál Martin about developments in AI technologies in recent years. According to a note of the meeting: “There was a discussion on the opportunities and potential use-cases of generative AI, including in healthcare and education”.
During the meeting, details of which were disclosed under the Freedom of Information Act, Friar told Martin that ChatGPT use was lower in Ireland, noting that “28 per cent of the population in Ireland is using ChatGPT weekly, compared to 50 per cent in the best countries”, according to a note of the meeting.
Brendan Kelly, professor of psychiatry at Trinity College Dublin, says that people will turn to AI bots if “there are service delays or deficiencies”.
The consultant psychiatrist says the increased use of ChatGPT for therapeutic reasons is “a wake-up call for services – health services, social services and mental health services – to provide better and proper information in a very clear and responsive way”.
Prof Brendan Kelly says people will use ChatGPT and other LLMs when other services are delayed or deficient. Photograph: Nick Bradshaw
“We, as in the services, need to get better and prompter at responding to people with proper information, or else they will turn to less reliable sources,” he says.
“AI is very good at routine tasks, organisational tasks and rote tasks. There’s no harm in using it for that,” he says.
“The problem is when the LLM starts to seduce you with its apparent certainty about the world, its reliability, its responsiveness and interest by tacking on a question at the end of an answer, asking: ‘Do you want more?’”
He warns people against developing a relationship with ChatGPT.
“A relationship with an LLM might be seductive on the surface, but it is ultimately profoundly unsatisfactory because it has no personality of its own,” he says.
“LLMs are psychologically powerful if we let them be,” the psychiatrist says, “but they are also formulaic and ultimately, they’re empty.”
“It is always necessary that the user has more knowledge than the LLM and can critically appraise what comes out,” he says.
Kelly believes there is “no amount of legislation or regulation that will make an AI therapy sufficiently responsible”, but he does believe there are regulations that can mitigate risks for vulnerable users.
“We can make it a requirement for AI models that offer anything approaching therapy to have in place not just guardrails, but very assertive protections for people who express thoughts of self-harm or suicide, for example,” he says.
“They are capable of doing a better job detecting suicidal and self-harm language, blocking instructions and providing proper signposting to crisis lines.
“The tech companies are capable of monitoring literally everything I look at on my laptop and targeting ads to me. They’re more than capable of doing a much better job with people who are distressed interacting with their tools.”
In response to comments from Prof Kelly and Byrne, an OpenAI spokesman acknowledged that “people sometimes turn to ChatGPT in sensitive moments”.
“Over the last few months, we’ve worked with mental health experts around the world and updated our models to help ChatGPT more reliably recognise signs of distress, respond with care and guide people toward real-world support,” the company said.
“We’ll continue to evolve ChatGPT’s responses with input from experts to make it as helpful and safe as possible.”
The spokesman added that OpenAI was conducting research into teaching the LLM how to recognise distress, de-escalate conversations and guide people towards professional care.
- Do you regularly use ChatGPT as an adviser, friend, therapist or a source of information? If so, we would like to hear from you. Please share your experience of using ChatGPT or other LLMs in the form below.