On 10 May 2024, three-week-old Sidra Aliabase died after being wrongly prescribed sodium acid phosphate instead of sodium chloride, “at approximately five times the recommended dose for a neonate of her size”.
This is what the coroner wrote in the newborn’s ‘Prevention of future deaths’ report1.
In court, the prescribing doctor confirmed they had chosen the wrong drug from the drop-down menu.
This is one tragic example of the risk of ‘look-alike sound-alike’ (LASA) medication errors in electronic prescribing systems.
In November 2025, a national patient safety alert was issued on the risk of harm from healthcare staff incorrectly recording patients’ penicillin allergies as penicillamine allergies, providing a further example2.
“This error can result in patients with known penicillin allergies being prescribed penicillin-based antibiotics, increasing the risk of a potentially fatal anaphylactic reaction,” it said.
The alert explained that the error could occur in several ways: penicillamine appears above penicillin in an alphabetical drop-down list, for example, or an allergy page may display drugs by individual drug name rather than by group, meaning that penicillamine may be the only option offered when staff type ‘penicill’.
Since the 1990s, electronic prescribing and medicines administration (ePMA) systems have been rolled out across the NHS, particularly following NHS England’s push to go paperless — a target first set for 2018 and later moved to 2024 following delays3. The deadline was pushed back again to 2025, which the government said was owing to delays caused by the COVID-19 pandemic.
According to the UK government, the use of electronic prescribing can reduce medication errors by 30%4. However, the above examples show that there is still potential for errors to occur, particularly in relation to LASA medicines, raising the question: have ePMA systems increased the risk of LASA errors?
Incident data
To answer this question, The Pharmaceutical Journal sent a Freedom of Information request to NHS England for data on patient safety incidents associated with LASA medicines between 2015 and 2025, to determine if there has been an increase in the number of incidents as ePMA systems have been rolled out to increasing numbers of NHS trusts in England (see Figure 1).
However, it was difficult to obtain data specifically on LASA incidents for multiple reasons. Two incident reporting systems have been running concurrently over the past few years, owing to a transition from the original ‘National reporting and learning system’ (NRLS), which ran from 2003 to 2024, to the ‘Learn from patient safety events’ (LFPSE) service, which was introduced in 2021. This created the potential issue of dual reporting during the crossover period (2021–2024), when both systems were running.
In a report on the transition, NHS England said: “Preliminary analysis suggests that most organisations transitioning to LFPSE continued to submit to NRLS after their transition date. Further analysis will be needed to examine the extent to which this was dual reporting of the same incidents, versus overlapping reporting of different incidents5.”
The way in which LASA incidents are reported further confounds the issue. There is no specific category for LASA incidents; instead, they may be recorded under categories such as ‘wrong/transposed/omitted medicine label’ or ‘poor packaging or design of medicines that might lead to errors’. In addition, reports are often submitted as free text, making the data hard to extract and allowing non-LASA incidents to be captured under the same descriptions.
Figure 1: Potential look-alike sound-alike incidents between 2015 and 2025
From the available data, NHS England was able to identify the drug pairs involved in the most frequently reported errors (see Table).
Table: Most reported drug pairs involved in LASA errors

Replacing one type of error with another
While the available data may not show an increase in errors, it is possible that LASA errors in traditional systems have just been replaced with new LASA errors in electronic systems, suggests Bryony Dean Franklin, professor of medication safety at University College London, and director and executive lead pharmacist for research at the Centre for Medication Safety and Service Quality, Imperial College Healthcare NHS Trust.
“With paper prescribing, it was more likely that you would write in your head the right drug, but the handwriting would be so bad it would be read differently by the nurse or pharmacist,” she says.
“So, the ‘look-alike’ error would be perhaps at the dispensing or administration stage, rather than the prescribing stage, whereas now I think the error is perhaps more likely at the prescribing stage because it’s picking from a menu.”
It’s almost that we’ve taken away illegibility errors with one hand, and given drop-down menu errors with the other
Julia Scott, pharmacist and chief information officer at Dartford and Gravesham NHS Trust
Franklin says these errors may have balanced each other out, explaining why there may not be a difference in the data.
Julia Scott, a pharmacist and chief information officer at Dartford and Gravesham NHS Trust, echoes this. “It’s almost that we’ve taken away illegibility errors with one hand, and given drop-down menu errors with the other. So, I wouldn’t expect LASA errors to increase or decrease. I would expect them to change.”
A study comparing prescribing errors under paper-based prescribing with those following the implementation of an ePMA system at a London teaching hospital found that, while errors involving incorrect doses and illegible or incomplete orders were less common with electronic prescribing, those involving duplication, omission, incorrect drug and incorrect formulation were more common6.
Mitigating errors
One of the tactics used to try to prevent LASA errors in paper-based prescribing is ‘tall-man lettering’, in which certain letters in drug names are capitalised to distinguish them from similar names (see Figure 2).
Figure 2: Example of tall-man lettering
Adapted from: Quentin Lohmeyer et al. BMJ Qual Saf 2023;32:26-33
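In software, tall-man lettering amounts to a lookup against a maintained list of confusable names. The sketch below is illustrative only: the dictionary and function names are hypothetical, and the capitalisation patterns are common published examples rather than any particular system’s configuration.

```python
# Illustrative tall-man display lookup; the dict/function names are
# hypothetical and the patterns are common published examples, not
# any real ePMA system's configuration.

TALL_MAN = {
    "prednisone": "predniSONE",
    "prednisolone": "prednisoLONE",
    "hydroxyzine": "hydrOXYzine",
    "hydralazine": "hydrALAZINE",
}

def display_name(drug: str) -> str:
    """Show the tall-man form for known LASA drugs, else the plain name."""
    return TALL_MAN.get(drug.lower(), drug)

print(display_name("prednisolone"))  # prednisoLONE
print(display_name("amoxicillin"))   # amoxicillin (not on the LASA list)
```

Because the highlighting lives in a single lookup table, a trust could in principle maintain its own local list of problem pairs without changing prescribing logic.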
“There is some evidence it helps, but it doesn’t mitigate risk completely. But can you implement tall man in your e-prescribing system to add an additional layer of prompting?” asks Scott.
She also suggests changing how drugs are grouped. “If we’re aware of commonly known LASA groups or pairs, can you add anything extra? For example — in terms of the sorting — such as forcing things out of alphabetical order if necessary, to take penicillamine and penicillin away from each other in a list.
“How do humans behave? If your drop-down menu list is too long, is there a tendency to go for the first thing that looks around about right? So how do we think about truncation and things like minimum character sets — how many letters do I have to type before it will offer me a drop-down list?” she explains.
“In something like the gabapentin/pregabalin combo — which is a classic LASA pair — if I only have to type ‘gaba’, I’m going to get offered both. If I have to type five letters and put ‘gabap’, I’ve immediately lost the LASA error. So, there are things you can build in that will narrow that drop-down menu list and just make it safer, but this needs to be balanced against usability.”
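Scott’s minimum-character-set idea can be sketched in a few lines. The drug list, threshold value and function name below are illustrative assumptions, not taken from any real ePMA product:

```python
# Illustrative sketch of a minimum-character threshold for a drug-picker
# drop-down; the drug list and threshold are hypothetical examples.

DRUG_LIST = ["gabapentin", "pregabalin", "penicillamine",
             "phenoxymethylpenicillin", "ciprofloxacin"]

MIN_CHARS = 5  # no suggestions until the prescriber has typed 5 letters

def suggest(query: str, drugs=DRUG_LIST, min_chars=MIN_CHARS):
    """Return drop-down suggestions only once the query is specific enough."""
    if len(query) < min_chars:
        return []  # too short: offering a list now invites a LASA pick
    return sorted(d for d in drugs if query.lower() in d.lower())

print(suggest("gaba"))   # [] - below the threshold, nothing offered
print(suggest("gabap"))  # ['gabapentin'] - pregabalin no longer matches
```

As the quote notes, the threshold trades safety against usability: set too high, it slows every prescription; set too low, confusable pairs reappear in the same list.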
Integrating AI
Scott also hopes that the integration of clinical decision support AI could help prevent LASA errors. “If you’ve got ePMA ideally integrated with your electronic patient record, can you start having things like natural language processing coming in and applying logic, like ‘you were writing about a diagnosis of a chest infection, but you’ve selected penicillamine. I know those two things don’t go together’? You can start to bring some really sophisticated, clever prompts in.”
However, she also warns of a ‘flip side’ for AI in relation to errors from ambient voice technology (AVT), also known as ‘AI scribes’.
AVTs listen to conversations between patients and healthcare professionals, after which they generate a transcript and clinical summary of the consultation. These outputs can be transformed into clinical notes and used to populate health records, or generate letters for patients or referral letters to other clinicians.
In January 2026, NHS England urged clinicians to take advantage of the technologies.
“That’s going to bring a whole new category of significant sound-alike error risk, because it’s almost back to the days of verbal orders, when verbal orders can be really easily misheard,” Scott explains.
“[For example], the AI scribe has heard penicillin as penicillamine, or you’ve truncated your ‘cipro’, so it needs to know that you mean ciprofloxacin. So AVT is another layer where we’re going to have to think very heavily about a brand-new LASA error mechanism that we don’t have at the moment.”
Scott goes on to question how this error could then be mitigated. “Are we going to have to have AI-enabled clinical decision support layered over AVT to retrospectively apply tall man highlighting and things like that to the transcript that you’re checking to help draw your lazy human eye to the verbal errors added to the transcript?
“Everything you introduce, there’ll be a new error mechanism,” she concludes. “It’s not about humans being fallible and making mistakes. Some of it is about system design, but a huge amount, for me, is about understanding those fundamental cognitive mechanisms that lead to errors and trying to plan ahead for those and think about how we can use design and engineering principles to mitigate the risk.”
Other methods
Franklin also points to other ways of reducing LASA errors. “There’s a system that we’ve been working on called ‘Touchdose’, which allows prescribing by indication, so it matches doses to indications, which then does a little bit more sensitive checking. It’s like making the prescriber link in with ‘what’s the clinical indication that we’re using this drug for?’”
Touchdose — developed by Imperial College London spinout Dosium and part funded by the then Royal Pharmaceutical Society, with the investment remaining with the Royal College of Pharmacy — is a clinical decision support system. It calculates the correct dosage of the drug for the individual patient and indication — including frequency, duration and route of administration. A patient’s details are pulled from their medical record and the system filters by characteristics, such as age and weight, to suggest only relevant treatment options. It then calculates the correct doses. The tool has primarily been designed for use in paediatrics and pulls data from the British National Formulary (BNF) and BNF for Children.
“Because it’s requiring the prescriber to go through a workflow that involves picking the indication, I think this would be less likely to have look-alike type errors, because it’s making you match things up,” Franklin says.
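The indication-matching check Franklin describes can be sketched very roughly. Everything below (the indication-to-drug mapping, the drug names and the function) is an invented placeholder, not BNF data or Touchdose’s actual logic:

```python
# Highly simplified sketch of an indication-based safety check; the
# mapping and drug names are invented placeholders for illustration.

# indication -> drugs plausibly prescribed for it (illustrative only)
INDICATION_DRUGS = {
    "neuropathic pain": {"gabapentin", "pregabalin"},
    "chest infection": {"amoxicillin", "doxycycline"},
}

def matches_indication(drug: str, indication: str) -> bool:
    """Return False (i.e. flag for review) when the selected drug has no
    link to the indication the prescriber has recorded."""
    return drug in INDICATION_DRUGS.get(indication, set())

# A LASA slip is caught because the picked drug does not fit the indication.
print(matches_indication("pregabalin", "neuropathic pain"))    # True
print(matches_indication("penicillamine", "chest infection"))  # False
```

The point of the workflow is that the prescriber never browses a full alphabetical drug list at all; only drugs consistent with the chosen indication are offered.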
A study of use of the tool in a London teaching hospital showed that, when Touchdose was used, the overall prescribing error rate was 1.2%, compared with 7.1% in standard medication orders7.
Touchdose currently sits on top of the Cerner ePMA system, but Franklin says it is being developed to interface with others.
Under-reporting
Of course, with limited data currently available, it is difficult to know the true scale of LASA errors, particularly in relation to ePMA systems.
“We know that only about 1 in 100 prescribing errors and about 1 in 1,000 administration errors ends up reported as an incident report, so we’ve got such massive underreporting that the granularity of picking anything up is quite tricky,” says Franklin.
Reporting requires you to be aware that an error has occurred and be willing to report it
Bryony Dean Franklin, professor of medication safety at University College London, and director and executive lead pharmacist for research at the Centre for Medication Safety and Service Quality, Imperial College Healthcare NHS Trust
“Reporting requires you to be aware that an error has occurred and be willing to report it, have the time and energy to do so and be able to get on a computer on the ward … so we’ve got massive under-reporting for many good reasons, not because people are trying to hide things under the carpet, just because of practicalities.”
Franklin is hopeful that AI will make a difference in analysing LASA error reporting. “You get thousands of these reports and they’re typically very varied in terms of quality – some of them are incredibly minimal in terms of the detail.
“I’ve done analysis of these where I’ve obtained the whole dataset from NHS England, and [spent] hours and hours of just reading these free-text fields and trying to work out what goes on.
“One of the things with the new LFPSE system is that I think there’s a hope that it will be more amenable to analysis, whether that’s with AI or something else.”
Scott agrees that AI could be the change and says that, in an ideal world, reporting would capture near-misses as well.
“They’re more powerful data than the actual incidents, and certainly the incidents that make it to harm. In my ideal world, there’s an ePMA system where you can make a mistake, and there’s a way of seeing ‘I nearly did that, silly me.’
“Can I press a button that captures the almost mistake? Because it knows what I’ve done. It knows where I’ve clicked. It knows the workflow I followed. Can I press a button that says capture a near-miss further down the line?”
“You should very easily be able to have a partially completed incident form, assuming an incident management system is integrated with your ePMA system, that pulls across all that demographic stuff, so that you can then just fill out a narrative description of what happened. You’d use AI support so you can narrate what happened, and it’s going to parse it for you into a sensible natural language structure and extract all the coded concepts, pull the drug data from what you’ve said, or from the prescription that it can see, because this clever incident reporting system now has access to the ePMA system.
“In my ideal world, it just needs you to describe what’s gone wrong, and then the AI is clever enough to start doing a thematic analysis for you very quickly.”
She says that a barrier to reporting in the current system is that staff have to stop what they are doing and open the incident reporting system.
“I think [staff think] ‘I’ll report that later’, carry on with the day job, because they’re in the middle of patient care, and then that thought is gone. [They can have] the best intentions of reporting it, but it’s never going to happen if they are not able to report it in the moment.”
“Healthcare professionals all understand the benefits of incident reporting and analysis and so on. They do want to do it. They want feedback. They want evidence that it’s making a difference, but it’s hard to make a difference if you don’t get reporting in [the right] volumes.”
Scott is hopeful that AI can vastly improve the current system.
“We obviously need to tackle known issues with AI, including environmental impact and ethical concerns, but if we can do that, there is such potential for it to enhance medication safety,” she explains.
“In time, there may be all sorts of new risks that we just cannot imagine at the moment; whole new error types that AI may create. But this should not prevent us from taking advantage of all the potential benefits; we just need investment in the skills and knowledge to do this safely.”
It seems clear that LASA errors are unlikely to ever be fully eliminated. However, NHS England is hopeful the LFPSE system will improve things.
Its website says: “[The LFPSE system] introduces improved capabilities for the analysis of patient safety events occurring across healthcare, and enables better use of the latest technology, such as machine learning, to create outputs that offer a greater depth of insight and learning that are more relevant to the current NHS environment”8.
Perhaps this system, coupled with the potential of AI, means ePMA systems may eventually help to reduce the rate of LASA errors.
1. Prescribing error led to death of baby, says coroner. Pharmaceutical Journal. Published online 2026. doi:10.1211/pj.2026.1.396976

6. Franklin BD, Puaar S. What is the impact of introducing inpatient electronic prescribing on prescribing errors? A naturalistic stepped wedge study in an English teaching hospital. Health Informatics J. 2019;26(4):3152-3162. doi:10.1177/1460458219833112

7. Feather C, Appelbaum N, Le Geyt J, Jheeta S, Maconochie I, Franklin BD. Evaluating the impact of an indication-based, patient-specific prescribing tool on prescribing errors in paediatrics: a non-randomised, before-and-after study. BMJ Paediatr Open. 2025;9(1):e003662. doi:10.1136/bmjpo-2025-003662