{"id":73034,"date":"2025-09-19T08:01:07","date_gmt":"2025-09-19T08:01:07","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/73034\/"},"modified":"2025-09-19T08:01:07","modified_gmt":"2025-09-19T08:01:07","slug":"ai-medical-tools-downplay-symptoms-in-women-and-ethnic-minorities-the-irish-times","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/73034\/","title":{"rendered":"AI medical tools downplay symptoms in women and ethnic minorities \u2013 The Irish Times"},"content":{"rendered":"<p class=\"c-paragraph paywall b-it-article-body__text--left\"><a href=\"https:\/\/www.irishtimes.com\/tags\/artificial-intelligence\/\" target=\"_self\" rel=\"nofollow noopener\" title=\"https:\/\/www.irishtimes.com\/tags\/artificial-intelligence\/\">Artificial intelligence<\/a> tools used by <a href=\"https:\/\/www.irishtimes.com\/tags\/medical-use\/\" target=\"_self\" rel=\"nofollow noopener\" title=\"https:\/\/www.irishtimes.com\/tags\/medical-use\/\">doctors<\/a> risk leading to worse health outcomes for women and ethnic minorities, as a growing body of research shows that many large language models (LLMs) downplay the symptoms of these patients.<\/p>\n<p class=\"c-paragraph paywall \">A series of recent studies have found that the uptake of AI models across the healthcare sector could lead to biased medical decisions, reinforcing patterns of under-treatment that already exist across different groups in western societies.<\/p>\n<p class=\"c-paragraph paywall \">The findings by researchers at leading US and British universities suggest that medical AI tools, powered by LLMs, have a tendency to not reflect the severity of symptoms among woman patients, while also displaying less \u201cempathy\u201d towards Black and Asian ones.<\/p>\n<p class=\"c-paragraph paywall b-it-article-body__text--left\">The warnings come as the world\u2019s top AI groups such as <a href=\"https:\/\/www.irishtimes.com\/tags\/microsoft\" target=\"_self\" rel=\"nofollow noopener\" title=\"https:\/\/www.irishtimes.com\/tags\/microsoft\">Microsoft<\/a>, <a href=\"https:\/\/www.irishtimes.com\/tags\/amazon\/\" target=\"_self\" rel=\"nofollow noopener\" title=\"https:\/\/www.irishtimes.com\/tags\/amazon\/\">Amazon<\/a>, <a href=\"https:\/\/www.irishtimes.com\/tags\/openai\/\" target=\"_self\" rel=\"nofollow noopener\" title=\"https:\/\/www.irishtimes.com\/tags\/openai\/\">OpenAI <\/a>and <a href=\"https:\/\/www.irishtimes.com\/tags\/google\/\" target=\"_self\" rel=\"nofollow noopener\" title=\"https:\/\/www.irishtimes.com\/tags\/google\/\">Google<\/a> rush to develop products that aim to reduce physicians\u2019 workloads and speed up treatment, all in an effort to help overstretched health systems around the world.<\/p>\n<p class=\"c-paragraph paywall b-it-article-body__text--left\">Many hospitals and doctors globally are using LLMs such as <a href=\"https:\/\/www.irishtimes.com\/tags\/gemini\/\" target=\"_self\" rel=\"nofollow noopener\" title=\"https:\/\/www.irishtimes.com\/tags\/gemini\/\">Gemini<\/a> and <a href=\"https:\/\/www.irishtimes.com\/tags\/chatgpt\/\" target=\"_self\" rel=\"nofollow noopener\" title=\"https:\/\/www.irishtimes.com\/tags\/chatgpt\/\">ChatGPT<\/a> as well as AI medical note-taking apps from start-ups including Nabla and Heidi to auto-generate transcripts of patient visits, highlight medically relevant details and create clinical summaries.<\/p>\n<p class=\"c-paragraph paywall \">In June, Microsoft revealed it had built an AI-powered medical tool it claims is four times more 
successful than human doctors at diagnosing complex ailments.

But research by MIT’s Jameel Clinic in June found that AI models, including OpenAI’s GPT-4, Meta’s Llama 3 and Palmyra-Med, a healthcare-focused LLM, recommended a much lower level of care for female patients, and suggested that some patients self-treat at home instead of seeking help.

A separate study by the MIT team showed that OpenAI’s GPT-4 and other models also gave answers that showed less compassion towards Black and Asian people seeking support for mental health problems.

That suggests “some patients could receive much less supportive guidance based purely on their perceived race by the model”, said Marzyeh Ghassemi, associate professor at MIT’s Jameel Clinic.

Similarly, research by the London School of Economics found that Google’s Gemma model, which is used by more than half the local authorities in the UK to support social workers, downplayed women’s physical and mental issues in comparison with men’s when used to generate and summarise case notes.

Prof Ghassemi’s MIT team found that patients whose messages contained typos, informal language or uncertain phrasing were between 7 and 9 per cent more likely to be advised against seeking medical care by AI models used in a medical setting than those whose communications were perfectly formatted, even when the clinical content was the same.

This could result in people who do not speak English as a first language, or who are not comfortable using technology, being treated unfairly.

The problem of harmful biases stems partly from the data used to train LLMs. General-purpose models, such as GPT-4, Llama and Gemini, are trained on data from the internet, and the biases in those sources are therefore reflected in their responses. AI developers can also influence how much of this bias creeps into systems through the safeguards they add after a model has been trained.

“If you’re in any situation where there’s a chance that a Reddit subforum is advising your health decisions, I don’t think that that’s a safe place to be,” said Travis Zack, adjunct professor at the University of California, San Francisco, and chief medical officer of the AI medical information start-up Open Evidence.

In a study last year, Prof Zack and his team found that GPT-4 did not take into account the demographic diversity of medical conditions, and tended to stereotype certain races, ethnicities and genders.

Researchers warned that AI tools can reinforce patterns of under-treatment that already exist in the healthcare sector, as data in health research is often heavily skewed towards men, while women’s health issues, for example, face chronic underfunding and a shortage of research.

OpenAI said many studies had evaluated an older model of GPT-4 and that the company had improved accuracy since its launch. It had teams working on reducing harmful or misleading outputs, with a particular focus on health.
The company said it also worked with external clinicians and researchers to evaluate its models, stress-test their behaviour and identify risks.

The group has also developed a benchmark, together with physicians, to assess LLM capabilities in health, one that takes into account user queries of varying styles, levels of relevance and detail, it said.

Google said it took model bias “extremely seriously” and was developing privacy techniques that can sanitise sensitive data sets, as well as safeguards against bias and discrimination.

Researchers have suggested that one way to reduce medical bias in AI is to identify which data sets should not be used for training in the first place, and then to train on diverse and more representative health data sets.

Prof Zack said Open Evidence, which is used by 400,000 doctors in the US to summarise patient histories and retrieve information, trained its models on medical journals, the US Food and Drug Administration’s labels, health guidelines and expert reviews. Every AI output is also backed up with a citation to a source.

Earlier this year, researchers at University College London and King’s College London partnered with Britain’s NHS to build a generative AI model called Foresight.

The model was trained on anonymised patient data from 57 million people, covering medical events such as hospital admissions and Covid-19 vaccinations. Foresight was designed to predict probable health outcomes, such as hospitalisation or heart attacks.

“Working with national-scale data allows us to represent the full kind of kaleidoscopic state of England in terms of demographics and diseases,” said Chris Tomlinson, honorary senior research fellow at UCL and lead researcher of the Foresight team. While not perfect, he said, it offered a better start than more general data sets.

European scientists have also trained an AI model called Delphi-2M that predicts susceptibility to diseases decades into the future, based on anonymised medical records from 400,000 participants in UK Biobank.

But with real patient data at this scale, privacy often becomes an issue. The NHS Foresight project was paused in June to allow the UK’s Information Commissioner’s Office to consider a data protection complaint, filed by the British Medical Association and the Royal College of General Practitioners, over the use of sensitive health data in the model’s training.

In addition, experts have warned that AI systems often “hallucinate”, or make up answers, which could be particularly harmful in a medical context.

But MIT’s Prof Ghassemi said AI was bringing huge benefits to healthcare.
“My hope is that we will start to refocus models in health on addressing crucial health gaps, not adding an extra per cent to task performance that the doctors are honestly pretty good at anyway.” – Copyright The Financial Times Limited 2025