{"id":247875,"date":"2025-07-08T11:32:29","date_gmt":"2025-07-08T11:32:29","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/247875\/"},"modified":"2025-07-08T11:32:29","modified_gmt":"2025-07-08T11:32:29","slug":"drs-spend-4x-more-time-on-paperwork-than-patients-uk-news","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/247875\/","title":{"rendered":"Drs spend 4x more time on paperwork than patients | UK | News"},"content":{"rendered":"<p>Health Editor\u00a0<\/p>\n<p>A major new study has issued a stark warning about the use of AI chatbots to give medical advice, revealing that ordinary people using the latest \u201cdoctor bots\u201d could still fail to spot serious illnesses or make the right decisions \u2014 despite the bots themselves scoring top marks in medical exams.<\/p>\n<p>The news comes as the government announces greater use of AI in healthcare as part of its ten-year plan.\u00a0<\/p>\n<p>In a world-first trial, 1,298 members of the public were asked to tackle ten common health scenarios &#8211; such as chest pain or abdominal problems &#8211; using either a leading AI chatbot (such as OpenAI&#8217;s GPT-4o) or traditional sources like Google or the NHS website.<\/p>\n<p>The bots, when tested alone, performed impressively &#8211; correctly identifying medical conditions 95% of the time. But when real people used them for help, the results were alarmingly poor. The diagnosis was correct in just 34.5% of cases, and in most cases &#8211; 56% &#8211; users made the wrong decision about what to do, such as going to A&amp;E or staying home.\u00a0<\/p>\n<p>The failure, say the researchers from major academic institutions, lies not with the bots&#8217; medical knowledge but with how they interact with humans. 
Users often gave the bots incomplete or vague information, while the bots, though technically correct, failed to explain clearly what to do next.<\/p>\n<p>The results raise serious concerns for the government and tech firms promoting AI in healthcare.<\/p>\n<p>Robert Dingwall, a professor in social sciences at Nottingham Trent University, said: &#8220;This is only one study but it is a useful reminder that what works in the models, simulations and imaginations of tech developers rarely transfers as successfully to real life. The Department of Health and Social Care should take care not to bet the farm on fools&#8217; gold.&#8221;<\/p>\n<p>And Professor Carl Heneghan, director of Oxford University\u2019s Centre for Evidence-Based Medicine, said: \u201cDoctors have to train for ten years to be consultants, developing the experience and expertise to recognise serious and life-threatening illnesses from the mundane. While AI has a role in areas such as interpreting X-rays and ECGs, it is no replacement for a thorough history and examination when it comes to diagnosing disease. The widespread rollout of untested AI can waste resources and, as this study shows, harm patients seeking a diagnosis.\u201d<\/p>\n<p>Last week the government published its 10-Year Plan to shift the NHS from treating illness to preventing it \u2014 a strategy heavily reliant on digital tools, apps and AI to empower patients.<\/p>\n<p>In its manifesto, published this month, it stated: \u201cThe plan will bring it (the NHS) into the digital age, making sure staff benefit from the advantages and efficiencies available from new technology\u2026 The government will also use digital telephony so all phone calls to GP practices are answered quickly. For those who need it, they will get a digital or telephone consultation the same day they request it.\u201d<\/p>\n<p>But this research shows the gap between AI\u2019s performance in the lab and its use in the real world. 
The authors warn that current benchmarks are misleading, and that AI tools must be tested not just on knowledge, but on how well they communicate with non-experts.<\/p>\n<p>\u201cJust because a chatbot can pass the doctor\u2019s exam doesn\u2019t mean it can help you when you\u2019re sick,\u201d said one of the study leads. \u201cIt\u2019s like giving someone a stethoscope and expecting them to perform heart surgery.\u201d<\/p>\n<p>The researchers say future AI tools must be far more proactive \u2014 asking clear, guiding questions and actively managing the conversation, instead of relying on users to know what details are medically important.<\/p>\n<p>The findings echo earlier research showing that even trained doctors did not diagnose patients more accurately with AI help. The new study suggests the same is true for the general public.<\/p>\n<p>Experts are calling for rigorous user trials before deploying AI in healthcare, especially when it comes to direct patient advice. Without this, there\u2019s a real danger that people could be lulled into a false sense of security, putting off seeing a doctor \u2014 or rushing to A&amp;E unnecessarily.<\/p>\n<p>A Department of Health and Social Care spokeswoman said: \u201cThrough our 10 Year Health Plan, we&#8217;re slashing bureaucracy across the health service, reducing burdensome administrative tasks and making use of technology so doctors can spend time on what they do best &#8211; caring for patients. This includes rolling out AI scribes to end the need for clinical notetaking, letter drafting, and manual data entry so clinicians can focus on treating patients, saving the same time as adding 2,000 more doctors into general practice.<\/p>\n<p>&#8220;We have also already reduced the amount of repetitive mandatory training resident doctors are required to do and alongside delivering the second above-inflation pay increase in a row this year, we have been listening to doctors to make their working lives better. 
There\u2019s more to do, but the NHS has been making good progress on small changes which have an outsize impact.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Health Editor\u00a0 \u00a0 A major new study has issued a stark warning about the use of AI chatbots&hellip;\n","protected":false},"author":2,"featured_media":247876,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4316],"tags":[39985,95934,105,4348,95933,6796,211,95935,16,15,3595,6795],"class_list":{"0":"post-247875","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-ai-in-healthcare","9":"tag-government-health-plan","10":"tag-health","11":"tag-healthcare","12":"tag-medical-chatbots","13":"tag-mens-health","14":"tag-nhs","15":"tag-patient-diagnosis-challenges","16":"tag-uk","17":"tag-united-kingdom","18":"tag-wes-streeting","19":"tag-womens-health"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/114817397391140941","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/247875","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/comments?post=247875"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/247875\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/247876"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=247875"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com
\/uk\/wp-json\/wp\/v2\/categories?post=247875"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=247875"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}