{"id":383097,"date":"2025-08-29T18:21:09","date_gmt":"2025-08-29T18:21:09","guid":{"rendered":"https:\/\/www.europesays.com\/uk\/383097\/"},"modified":"2025-08-29T18:21:09","modified_gmt":"2025-08-29T18:21:09","slug":"the-trump-administration-will-automate-health-inequities","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/uk\/383097\/","title":{"rendered":"The Trump Administration Will Automate Health Inequities"},"content":{"rendered":"<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The White House\u2019s AI Action Plan, released in July, mentions \u201chealth care\u201d only three times. But it is one of the most consequential health policies of the second Trump administration. Its sweeping ambitions for AI\u2014rolling back safeguards, fast-tracking \u201c<a data-event-element=\"inline link\" href=\"https:\/\/www.ai.gov\/action-plan\" target=\"_blank\" rel=\"noopener\">private-sector-led innovation<\/a>,\u201d and banning <a data-event-element=\"inline link\" href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/07\/preventing-woke-ai-in-the-federal-government\/\" target=\"_blank\" rel=\"noopener\">\u201cideological dogmas such as DEI\u201d<\/a>\u2014will have long-term consequences for how medicine is practiced, how public health is governed, and who gets left behind.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Already, the Trump administration has purged data from government websites, slashed funding for research on marginalized communities, and pressured government researchers to restrict or retract work that contradicts political ideology. These actions aren\u2019t just symbolic\u2014they shape what gets measured, who gets studied, and which findings get published. Now, those same constraints are moving into the development of AI itself. 
Under the administration\u2019s policies, developers have a clear incentive to make design choices or pick data sets that <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/08\/ai-patriotism\/683995\/\" target=\"_blank\" rel=\"noopener\">won\u2019t provoke political scrutiny<\/a>.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">These signals are shaping the AI systems that will guide medical decision making for decades to come. The accumulation of technical choices that follows\u2014encoded in algorithms, embedded in protocols, and scaled across millions of patients\u2014will cement the particular biases of this moment in time into medicine\u2019s future. And history has shown that once bias is encoded into clinical tools, even obvious harms can take decades to undo\u2014if they\u2019re undone at all.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">AI tools were permeating every corner of medicine before the action plan was released: assisting radiologists, processing insurance claims, even communicating on behalf of overworked providers. They\u2019re also being used to fast-track the discovery of new cancer therapies and antibiotics, while advancing precision medicine that helps providers tailor treatments to individual patients. <a data-event-element=\"inline link\" href=\"https:\/\/www.ama-assn.org\/practice-management\/digital-health\/2-3-physicians-are-using-health-ai-78-2023\" target=\"_blank\" rel=\"noopener\">Two-thirds<\/a> of physicians used AI in 2024\u2014a 78 percent jump from the year prior. Soon, not using AI to help determine diagnoses or treatments could be seen as malpractice.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">At the same time, AI\u2019s promise for medicine is limited by the technology\u2019s shortcomings. 
One health-care AI model <a data-event-element=\"inline link\" href=\"https:\/\/www.theverge.com\/health\/718049\/google-med-gemini-basilar-ganglia-paper-typo-hallucination\" target=\"_blank\" rel=\"noopener\">confidently<\/a> hallucinated a nonexistent body part. Another may make doctors\u2019 procedural skills <a data-event-element=\"inline link\" href=\"https:\/\/www.thelancet.com\/journals\/langas\/article\/PIIS2468-1253(25)00133-5\/abstract\" target=\"_blank\" rel=\"noopener\">worse<\/a>. Providers are demanding stronger regulatory oversight of AI tools, and some patients are <a data-event-element=\"inline link\" href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC6304538\/\" target=\"_blank\" rel=\"noopener\">hesitant<\/a> to have AI analyze their data.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The stated goal of the Trump administration\u2019s <a data-event-element=\"inline link\" href=\"https:\/\/www.whitehouse.gov\/wp-content\/uploads\/2025\/07\/Americas-AI-Action-Plan.pdf\" target=\"_blank\" rel=\"noopener\">AI Action Plan<\/a> is to preserve American supremacy in the global AI arms race. But the plan also prompts developers of leading-edge AI models to make products free from \u201cideological bias\u201d and \u201cdesigned to pursue objective truth rather than social engineering agendas.\u201d This guidance is murky enough that developers must interpret vague ideological cues, then quietly <a data-event-element=\"inline link\" href=\"https:\/\/apnews.com\/article\/trump-woke-ai-executive-order-bias-f8bc08745c1bf178f8973ac704299bf4\" target=\"_blank\" rel=\"noopener\">calibrate<\/a> what their models can say, show, or even learn to avoid crossing a line that\u2019s never clearly drawn.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Some medical tools incorporate large language models such as ChatGPT. 
But many AI tools are bespoke and proprietary and rely on narrower sets of medical data. Given how this administration has aimed to restrict data collection at the Department of Health and Human Services and ensure that those data conform to its ideas about gender and race, any health tools developed under Donald Trump\u2019s AI action plan may face pressure to rely on training data that reflects similar principles. (In response to a request for comment, a White House official said in an email that the AI plan and the president\u2019s executive order on scientific integrity together ensure that \u201cscientists in the government use only objective, verifiable data and criteria in scientific decision making and when building and contracting for AI,\u201d and that future clinical tools are \u201cnot limited by the political or ideological bias of the day.\u201d)<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Models don\u2019t invent the world they govern; they depend on and reflect the data we feed them. That\u2019s what every research scientist learns early on: garbage in, garbage out. And if governments narrow what counts as legitimate health data and research as AI models are built into medical practice, the blind spots won\u2019t just persist; they\u2019ll compound and calcify into the standards of care.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In the United States, gaps in data have already limited the perspective of AI tools. During the first years of COVID, data on <a data-event-element=\"inline link\" href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC7969142\/\" target=\"_blank\" rel=\"noopener\">race<\/a> and ethnicity were frequently missing from death and vaccination reports. 
A review of data sets fed to AI models used during the pandemic <a data-event-element=\"inline link\" href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2589750024001468\" target=\"_blank\" rel=\"noopener\">found<\/a> similarly poor representation. Filling these gaps is difficult and expensive\u2014but it\u2019s the best way to ensure the algorithms don\u2019t indelibly incorporate existing inequities into clinical code. After years of advocacy and investment, the U.S. had <a data-event-element=\"inline link\" href=\"https:\/\/www.cdc.gov\/public-health-data-strategy\/php\/about\/index.html\" target=\"_blank\" rel=\"noopener\">finally<\/a> begun to close long-standing <a data-event-element=\"inline link\" href=\"https:\/\/www.pew.org\/en\/research-and-analysis\/articles\/2024\/12\/12\/states-must-modernize-public-health-data-reporting-new-report-finds-promising-practices\" target=\"_blank\" rel=\"noopener\">gaps<\/a> in how we track health and who gets counted.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">But over the past several months, that type of fragile progress has been deliberately rolled back. At times, <a data-event-element=\"inline link\" href=\"https:\/\/www.theguardian.com\/us-news\/2025\/feb\/04\/dcd-pages-trump-public-health\" target=\"_blank\" rel=\"noopener\">CDC<\/a> web pages have been rewritten to reflect ideology, not epidemiology. The National Institutes of Health halted funding for projects it labeled as \u201cDEI\u201d\u2014despite never defining <a data-event-element=\"inline link\" href=\"https:\/\/arstechnica.com\/science\/2025\/07\/doge-told-the-nih-which-grants-to-cancel-with-no-scientific-review\/\" target=\"_blank\" rel=\"noopener\">what that actually includes<\/a>. Robert F. Kennedy Jr. 
has made noise about letting NIH scientists publish only in government-run journals, and <a data-event-element=\"inline link\" href=\"https:\/\/www.trialsitenews.com\/a\/flawed-science-bought-conclusions-the-aluminum-vaccine-study-the-media-wont-question-aaec2793\" target=\"_blank\" rel=\"noopener\">demanded<\/a> the retraction of a rigorous study, published in the Annals of Internal Medicine, that found no link between aluminum and autism. (Kennedy has promoted the opposite idea: that such vaccine ingredients are a cause of autism.) And a recent <a data-event-element=\"inline link\" href=\"https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/08\/improving-oversight-of-federal-grantmaking\/\" target=\"_blank\" rel=\"noopener\">executive order<\/a> gives political appointees control over research grants, including the power to cancel those that don\u2019t \u201cadvance the President\u2019s policy priorities.\u201d Selective erasure of data is becoming the foundation for future health decisions.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">American medicine has seen the consequences of building on such a shaky foundation before. Day-to-day practice has long relied on clinical tools that confuse race with biology. Lung-function testing used race corrections derived from slavery-era plantation medicine, leading to widespread underdiagnosis of serious lung disease in Black patients. In 2023, the <a data-event-element=\"inline link\" href=\"https:\/\/site.thoracic.org\/about-us\/news\/ats-publishes-official-statement-on-race-ethnicity-and-pulmonary-function-test-interpretation\" target=\"_blank\" rel=\"noopener\">American Thoracic Society<\/a> urged the use of a race-neutral approach, yet adoption is uneven, with many labs and devices still defaulting to race-based settings. A kidney-function test used race coefficients that delayed specialty referrals and transplant eligibility. 
An obstetric <a data-event-element=\"inline link\" href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/39674855\/\" target=\"_blank\" rel=\"noopener\">calculator<\/a> factored in race and ethnicity in ways that increased unnecessary Cesarean sections among Black and Hispanic women.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Once race-based adjustments are baked into software defaults, clinical guidelines, and training, they persist\u2014quietly and predictably\u2014for years. Even now, dozens of flawed decision-making tools that rely on outdated assumptions remain in daily use. Medical devices tell a similar story. Pulse oximeters <a data-event-element=\"inline link\" href=\"https:\/\/www.nejm.org\/doi\/10.1056\/NEJMc2029240?url_ver=Z39.88-2003&amp;rfr_id=ori:rid:crossref.org&amp;rfr_dat=cr_pub%20%200pubmed&amp;__cf_chl_tk=vKs0srF.GwQimpR0kQJPMnFjJEmfEs8hhOib2rtroSE-1756312309-1.0.1.1-dGtq72kZ0UpBKrMZV9mK8KjQTZalW07eLell1Jn8mT8\" target=\"_blank\" rel=\"noopener\">can miss<\/a> dangerously low oxygen levels in darker-skinned patients. During the COVID pandemic, those readings fed into hospital-triage algorithms\u2014leading to disparities in treatment and trust. Once flawed metrics get embedded into \u201cobjective\u201d tools, bias becomes practice, then policy.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">When people in power define which data matter and the outputs are unchallenged, the outcomes can be disastrous. In the early 20th century, the founders of modern statistics\u2014Francis Galton, Ronald Fisher, and Karl Pearson\u2014were also architects of the eugenics movement. 
Galton, who coined the term eugenics, <a data-event-element=\"inline link\" href=\"https:\/\/www.jstor.org\/stable\/2245329\" target=\"_blank\" rel=\"noopener\">pioneered<\/a> correlation and regression and used these tools to argue that traits like intelligence and morality were heritable and should be managed through selective breeding. Fisher, often hailed as the \u201cfather of modern statistics,\u201d was an active leader in the U.K.\u2019s Eugenics Society and <a data-event-element=\"inline link\" href=\"https:\/\/pmc.ncbi.nlm.nih.gov\/articles\/PMC8115641\/\" target=\"_blank\" rel=\"noopener\">backed its policy<\/a> of \u201cvoluntary\u201d sterilization of those deemed \u201cfeeble-minded.\u201d Pearson, creator of the p-value and chi-squared tests, founded the Annals of Eugenics journal and deployed statistical analysis to argue that Jewish immigrants would become a \u201c<a data-event-element=\"inline link\" href=\"https:\/\/onlinelibrary.wiley.com\/doi\/epdf\/10.1111\/j.1469-1809.1925.tb02037.x\" target=\"_blank\" rel=\"noopener\">parasitic race<\/a>.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">For each of these men\u2014and the broader medical and public-health community that supported the eugenics movement\u2014the veneer of data objectivity helped transform prejudice into policy. In the 1927 case Buck v. Bell, the Supreme Court codified their ideas when it upheld compulsory sterilization in the name of public health. That decision has never been formally overturned.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Many AI proponents argue that concerns about bias are overblown. They\u2019ll note that bias has been fretted over for years, and to some extent, they\u2019re right: Bias was always present in AI models, but its effects were more limited\u2014in part because the systems themselves were narrowly deployed. 
Until recently, the number of AI tools used in medicine was small, and most operated at the margins of health care, not at its core. What\u2019s different now is the speed and the scale of AI\u2019s expansion into this field, at the same time the Trump administration is dismantling guardrails for regulating AI and shaping these models\u2019 future.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Human providers are biased, too, of course. Researchers have found that women\u2019s medical concerns are <a data-event-element=\"inline link\" href=\"https:\/\/www.npr.org\/2023\/01\/04\/1146931012\/why-are-womens-health-concerns-dismissed-so-often\" target=\"_blank\" rel=\"noopener\">dismissed<\/a> more often than men\u2019s, and some white medical students falsely believe Black patients have thicker skin or feel less pain. Human bias and AI bias alike can be addressed through training, transparency, and accountability, but addressing the latter requires accounting for the fallibility of both humans and the technology itself. Technical fixes exist\u2014reweighting data, retraining models, and bias audits\u2014but they\u2019re often narrow and opaque. Many advanced AI models\u2014especially large language models\u2014are functionally <a data-event-element=\"inline link\" href=\"https:\/\/www.ibm.com\/think\/topics\/black-box-ai\" target=\"_blank\" rel=\"noopener\">black boxes<\/a>: Using them means feeding information in and waiting for outputs. When biases are produced in the computational process, the people who depend on that process are left unaware of when or how they were introduced. 
That opacity fuels a bias feedback loop: AI amplifies what we put in, then shapes what we take away, leaving humans <a data-event-element=\"inline link\" href=\"https:\/\/www.scientificamerican.com\/article\/humans-absorb-bias-from-ai-and-keep-it-after-they-stop-using-the-algorithm\/\" target=\"_blank\" rel=\"noopener\">more biased<\/a> for having trusted it.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">A \u201cmove fast and break things\u201d rollout of AI in health care, especially when based on already biased data sets, will encode similar assumptions into models that are enigmatic and self-reinforcing. By the time anyone recognizes the flaws, they won\u2019t just be baked into a formula; they\u2019ll be indelibly built into the infrastructure of care.<\/p>\n","protected":false},"excerpt":{"rendered":"The White House\u2019s AI Action Plan, released in July, mentions \u201chealth care\u201d only three times. But it is&hellip;\n","protected":false},"author":2,"featured_media":383098,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[4316],"tags":[105,4348,16,15],"class_list":{"0":"post-383097","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-healthcare","8":"tag-health","9":"tag-healthcare","10":"tag-uk","11":"tag-united-kingdom"},"share_on_mastodon":{"url":"https:\/\/pubeurope.com\/@uk\/115113445492623756","error":""},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/383097","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/commen
ts?post=383097"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/posts\/383097\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media\/383098"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/media?parent=383097"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/categories?post=383097"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/uk\/wp-json\/wp\/v2\/tags?post=383097"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}