
Bots can confidently deliver entirely false information.
As artificial intelligence becomes more integrated into our daily lives, a Limerick pharmacist is urging residents to treat AI-generated medical advice with extreme caution.
Rebecca Barry, pharmacist at Barry’s Care Plus Pharmacy in Adare, has raised concerns about the rising popularity of tools like ChatGPT and Gemini.
She warns that the technology is not yet capable of replacing a professional medical consultation.
Patient behaviour has already shifted over the last few years, with many people using Google to look up their symptoms, but AI takes this a step further.
Speaking on Live 95’s Limerick Today, Rebecca said, “It’s basically taking all of those websites and, instead of presenting you with a list of websites, it’s going through all of the relevant info in them and digesting it and giving it back to you in a way that’s really easy for us to consume.”
Rebecca says misleading information is a major concern with AI, particularly 'AI hallucinations', where bots confidently deliver entirely false information.
“A lot of people, when they’ve asked AI questions about taking minerals, as in mineral supplements, it was saying to eat rocks,” she said.
“There was a case where a patient put into Google AI what to do for a kidney stone, and it told them to drink urine. Obviously, that wasn’t correct. But who’s to tell them that that wasn’t correct? Only their health care professional. So what seemed to happen here is the system was interpreting all the guidance that’s out there.”
“So in that specific example, you’d be advised to drink lots of fluids and assess your urine to make sure it’s clear. But the AI bot was trying to combine all those different pieces of info. And it came out with a sentence that sort of seemed coherent, but it wasn’t the correct guidance at the end of the day.”
For Rebecca, AI's most critical flaw is that it doesn't know what questions to ask the patient.
Beyond accuracy, Rebecca also raised concerns regarding data security, saying, “Health care data is sensitive data, and everybody needs to realise that if you’re entering personal medical data into an open-source AI, that is your own personal private data that you are now sharing.”
Despite these flaws, the future of AI in healthcare does look promising, provided it stays in professional hands.
Rebecca predicts that within the next few years, AI will be used by hospitals to analyse scans with greater accuracy than humans and help doctors summarise clinical trials from reputable sources.
For now, the advice remains simple: use technology for information, but use a professional for a diagnosis.
“I would definitely caution you to confirm everything with a healthcare professional. Don’t take it at face value always,” she said.