{"id":27138,"date":"2025-08-27T19:23:08","date_gmt":"2025-08-27T19:23:08","guid":{"rendered":"https:\/\/www.europesays.com\/ie\/27138\/"},"modified":"2025-08-27T19:23:08","modified_gmt":"2025-08-27T19:23:08","slug":"patient-community-must-have-a-voice-in-public-health-ai","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ie\/27138\/","title":{"rendered":"Patient community must have a voice in public health AI"},"content":{"rendered":"<p>In both health care and public health \u2014 distinct but overlapping fields \u2014 generative AI is already reshaping how systems operate. Clinical settings are leveraging generative AI tools to\u00a0<a href=\"https:\/\/www.statnews.com\/2025\/08\/20\/ai-scribe-use-doctors-health-insurance-bills-may-rise\/\" rel=\"nofollow noopener\" target=\"_blank\">draft clinical notes<\/a>\u00a0and\u00a0<a href=\"https:\/\/www.massgeneralbrigham.org\/en\/about\/newsroom\/press-releases\/pitfalls-and-opportunities-for-generative-ai-in-patient-messaging-systems\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">messages to patients<\/a>, while the public health sector is exploring the ways these systems can\u00a0<a href=\"https:\/\/www.rti.org\/insights\/AI-in-health-communications\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">tailor health messaging to different communities<\/a>.<\/p>\n<p>Yet across health care and public health, adoption of generative AI tools has often come with limited transparency and oversight, and little to no engagement with patients and communities, particularly those most impacted by structural inequity.<\/p>\n<p>While centering community voices and priorities should be essential in generative AI development, governance, and implementation in any sector or domain, it is particularly critical in health. 
Health is deeply personal, it can be precarious, and, in the U.S., it continues to be shaped by structural exclusion and harm, rooted in long-standing legacies and sustained by current policies and structures that marginalize racially minoritized, queer, transgender, and disabled people. When AI systems are not designed and implemented with this context in mind, the potential for harm is profound.\u00a0<\/p>\n<p>We have already seen this happen with predictive (non-generative) AI models. One\u00a0widely used algorithm in health care <a href=\"https:\/\/www.science.org\/doi\/10.1126\/science.aax2342\" target=\"_blank\" rel=\"noopener nofollow\">underestimated<\/a> the need for follow-up care for\u00a0Black\u00a0patients, referring them for fewer services than their white counterparts, while overestimating medical need for white patients, who were referred for more. This occurred because the model used health care expenditures as a proxy for medical need, without accounting for the fact that, due to structural racism and its attendant barriers to care (lower rates of health insurance, provider bias, etc.),\u00a0Black\u00a0patients often receive less care and, as a result, spend less on services.<\/p>\n<p>Similarly, disabled people have experienced harm from AI-enabled risk stratification tools that deprioritized them for\u00a0<a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC8193195\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Covid-19 treatment<\/a>, based on assumptions about quality of life and life expectancy. These tools \u2014 many trained on incomplete, non-vetted, and biased data \u2014 risk reinforcing and even exacerbating existing health inequities. 
And because they often operate in the background, there is limited transparency and little to no accountability.\u00a0<\/p>\n<p>If community members and advocates were involved in the design of the referral-for-services algorithm, they likely would have challenged the assumption that lower spending equates to less need. This could have prompted the developers to use more equity-informed metrics in their assessment of the AI model. They would also likely have recommended continuously monitoring the model\u2019s outcomes for unexpected inequities in referral rates across demographic groups. Instead of the model\u2019s exclusive focus on streamlining referrals and reducing costs, a community-informed approach might have prioritized ensuring equitable access to services.<\/p>\n<p>Generative AI is beginning to make inroads into public health. 
For example, with the help of generative AI, the CDC has\u00a0<a href=\"https:\/\/fedscoop.com\/cdc-generative-ai-pilots-school-closure-tracking-website-updates\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">used social media data to monitor school closures<\/a>\u00a0to detect potential emerging outbreaks and\u00a0<a href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC9992514\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">forecast overdose trends<\/a>.<\/p>\n<p>As such generative AI tools become more accessible, public health systems, already chronically underfunded and overstretched, may adopt off-the-shelf generative AI models, leaving less flexibility to incorporate community input and governance. At the same time, public health\u2019s relatively slow uptake of generative AI provides a unique opportunity to embed community accountability before more of these models are fully scaled.<\/p>\n<p>Decision-making about AI in health must include the communities affected by those systems, many of whom have been excluded from any decision-making regarding this technology. We propose that generative AI be conceived and developed from the ground up, not from the top down, as is currently the case.<\/p>\n<p>Some frameworks are already pointing in this direction. Zainab Garba-Sani\u2019s\u00a0<a href=\"https:\/\/www.healthaffairs.org\/content\/forefront\/c-c-e-s-s-ai-new-framework-advancing-health-equity-health-care-ai\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">ACCESS AI framework<\/a>\u00a0offers a clear example. 
Designed for health care environments, it emphasizes engaging communities, identifying barriers to AI use, and embedding equity throughout the clinical AI development and implementation cycle.<\/p>\n<p>To further advance community-centered governance and decision-making in generative AI in health, we recently launched <a href=\"https:\/\/www.healthjustice.co\/our-services\/innovation-lab\/\" target=\"_blank\" rel=\"noopener nofollow\">the Grounded Innovation Lab @ Health Justice<\/a>, with the aim of maximizing accountability, equity, and transparency.<\/p>\n<p>In practice, communities would determine what qualifies as training data, for example, approving community-based narratives and rejecting sources that are biased, stigmatizing, or obtained without consent. Community governance groups would shape how generative AI in health should be evaluated, broadening the definition of AI model performance beyond technical metrics to include community priorities like trust and confidence.<\/p>\n<p>And, importantly, lived experience engaging with these systems would count as feedback data for improving AI model performance over time. 
To ensure truly meaningful governance, community members would meet regularly with health AI developers and other stakeholders in convenings designed for accessibility, including hybrid options. Members would be compensated for their time and expertise, and recruitment would draw on established partnerships with trusted community-based organizations.<\/p>\n<p>The Grounded Innovation Lab\u2019s focus extends beyond health care to include public health, recognizing public health\u2019s resource constraints and population-level functions. This latter point, in particular, makes community governance of AI systems necessary, not optional. We also recognize the severe environmental cost of AI, especially the siting of water- and energy-intensive data centers in racially minoritized and low-income communities, which also include high proportions of people who are disabled. These communities have already been burdened by the deleterious impact of health inequities; they are now bearing the environmental costs of generative AI\u2019s exponential rise. Our perspective is informed by broader critiques of the AI field, including frameworks such as Timnit Gebru\u2019s and \u00c9mile P. 
Torres\u2019\u00a0<a href=\"https:\/\/www.dair-institute.org\/projects\/tescreal\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">TESCREAL<\/a>\u00a0that call attention to AI systems\u2019 current harms rather than their hypothetical future risks.<\/p>\n<p>Given these already observed harms, communities and technologists can work together to identify alternative approaches that harness the benefits of generative AI, such as small or domain-specific language models that can run on personal mobile devices, reducing environmental harms while improving accessibility.<\/p>\n<p>There is a real and immediate need to reduce the harm of these systems while also exploring their potential benefits for advancing health equity.\u00a0As part of the reconciliation process on the Trump tax bill, the Senate recently rejected a proposed ban on state regulation of AI, underscoring the importance of community-centered approaches in AI design and governance.\u00a0To avoid deepening existing health inequities, we need investments in community-led models for the design, implementation, and governance of generative AI systems, and of AI systems more generally, in health. Community-centered participatory processes and accountability structures can prevent harm before it happens and help ensure that the communities most impacted by health inequities shape the technology they interact with every day.<\/p>\n<p>Health has the potential to be a proving ground for a more equitable, transparent, accountable, and community-centered AI field, both within the sector and beyond.<\/p>\n<p>Oni\u00a0Blackstock, M.D., is a former computer scientist, physician, researcher, and public health and health equity leader. She is the founder and executive director of <a href=\"https:\/\/www.healthjustice.co\/\" target=\"_blank\" rel=\"noopener nofollow\">Health Justice<\/a>, a racial and health equity consulting firm. 
Akinfe Fatou, M.S.W., is a disability justice advocate, strategist, and founder and CEO of <a href=\"https:\/\/www.cre8tivecadence.com\/\" target=\"_blank\" rel=\"noopener nofollow\">Cre8tive Cadence Consulting<\/a>, a disability-led social impact consulting firm.<\/p>\n","protected":false},"excerpt":{"rendered":"In both health care and public health \u2014 distinct but overlapping fields \u2014 generative AI is already reshaping&hellip;\n","protected":false},"author":2,"featured_media":27139,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[275],"tags":[289,22091,18,135,475,22092,474,19,17,2101],"class_list":{"0":"post-27138","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-artificial-intelligence","9":"tag-diversity-and-inclusion","10":"tag-eire","11":"tag-health","12":"tag-health-care","13":"tag-health-care-disparities","14":"tag-healthcare","15":"tag-ie","16":"tag-ireland","17":"tag-public-health"},"share_on_mastodon":{"url":"","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/27138","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/comments?post=27138"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/posts\/27138\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media\/27139"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/media?parent=27138"}],"wp:term":[{"taxonomy":"category","embeddable":true,"
href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/categories?post=27138"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ie\/wp-json\/wp\/v2\/tags?post=27138"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}