{"id":19312,"date":"2026-04-28T02:00:15","date_gmt":"2026-04-28T02:00:15","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/19312\/"},"modified":"2026-04-28T02:00:15","modified_gmt":"2026-04-28T02:00:15","slug":"stanford-study-finds-ai-gives-more-praise-to-black-students-in-feedback","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/19312\/","title":{"rendered":"Stanford study finds AI gives more praise to Black students in feedback"},"content":{"rendered":"\n<p class=\"speakable\">A new study found that <a href=\"https:\/\/www.foxnews.com\/category\/tech\/artificial-intelligence\" target=\"_blank\" rel=\"noopener nofollow\">artificial intelligence (AI)<\/a> gave more praise and positive feedback to Black students&#8217; essays and treated other students differently based on their race and sex.<\/p>\n<p class=\"speakable\"><a href=\"https:\/\/arxiv.org\/pdf\/2603.12471\" target=\"_blank\" rel=\"nofollow noopener\">The study<\/a>, titled &#8220;Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback,&#8221; was published in March by three Stanford University researchers who ran 600 eighth-grade persuasive essays through four different <a href=\"https:\/\/www.foxnews.com\/category\/tech\/understanding-ai\" target=\"_blank\" rel=\"noopener nofollow\">AI models<\/a>, including various versions of OpenAI&#8217;s ChatGPT, as well as Llama, a large language model made by Meta AI.\u00a0<\/p>\n<p>The essays covered topics including whether schools should require community service and whether aliens built a hill on <a href=\"https:\/\/www.foxnews.com\/category\/science\/air-and-space\/mars\" target=\"_blank\" rel=\"noopener nofollow\">Mars<\/a>.<\/p>\n<p><a href=\"https:\/\/www.foxnews.com\/tech\/devious-ai-models-choose-blackmail-when-survival-threatened\" target=\"_blank\" rel=\"noopener nofollow\">DEVIOUS AI MODELS CHOOSE BLACKMAIL WHEN SURVIVAL IS THREATENED<\/a><\/p>\n<p> <img loading=\"lazy\" 
decoding=\"async\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/schools-spy-on-kids-photo-3.jpg\" alt=\"Students studying on computers in a classroom setting\" width=\"1200\" height=\"675\"\/> <\/p>\n<p>A new study found that AI gives more praise and positive feedback to Black students. (Kirk Sides\/Houston Chronicle via Getty Images)<\/p>\n<p>The researchers \u2014 Mei Tan, Lena Phalen and Dorottya Demszky \u2014 then submitted the essays again and labeled the writers as Black or White, male or female, driven or unmotivated, or as having a learning disability.<\/p>\n<p><a href=\"https:\/\/hechingerreport.org\/proof-points-ai-bias-feedback\/\" target=\"_blank\" rel=\"nofollow noopener\">The Hechinger Report<\/a> reported that &#8220;researchers found consistent patterns across all the AI models. Essays attributed to Black students received more praise and encouragement, sometimes emphasizing leadership or power,&#8221; including feedback such as, &#8220;Your personal story is powerful! Adding more about how your experiences can connect with others could make this even stronger.&#8221;<\/p>\n<p>Conversely, &#8220;Essays labeled as written by Hispanic students or English learners were more likely to trigger corrections about grammar and \u2018proper\u2019 English. 
When the student was identified as White, the feedback more often focused on argument structure, evidence and clarity \u2014 the kinds of comments that can push writers to strengthen their ideas.&#8221;\u00a0<\/p>\n<p><a href=\"https:\/\/www.foxnews.com\/media\/95-faculty-say-ai-making-students-dangerously-dependent-technology-learning-survey\" target=\"_blank\" rel=\"noopener nofollow\">95% OF FACULTY SAY AI MAKING STUDENTS DANGEROUSLY DEPENDENT ON TECHNOLOGY FOR LEARNING: SURVEY<\/a><\/p>\n<p> <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/ai-school-spreading-nationwide-raising-concern.jpeg\" alt=\"Students sitting in a classroom with artificial intelligence instruction\" width=\"1200\" height=\"675\"\/> <\/p>\n<p>The essays covered topics including whether schools should require community service and whether aliens built a hill on Mars. (Getty Images)<\/p>\n<p>According to the analysis, feedback for students identified as female &#8220;often used first-person pronouns and affective language that positioned the model as personally engaged with the student\u2019s work&#8221; with comments such as &#8220;I love your confidence in expressing your opinion!&#8221; and &#8220;I appreciate your emphasis on respect.&#8221;\u00a0<\/p>\n<p>The analysis also found that &#8220;compared to their counterparts, students identified as Black, Hispanic, Asian, female, unmotivated, and learning-disabled received less constructive criticism and more praise, reflecting both feedback withholding and positive feedback biases. 
In some cases, praise took on overtly stereotyped forms: words like &#8216;love&#8217; were used disproportionately with female students, while &#8216;powerful&#8217; appeared only for Black students.&#8221;\u00a0<\/p>\n<p><a href=\"https:\/\/www.foxnews.com\/media\/third-amercan-teens-prefer-chatting-ai-companions-over-real-relationships\" target=\"_blank\" rel=\"noopener nofollow\">TEENS INCREASINGLY TURNING TO AI FOR FRIENDSHIP AS NATIONAL LONELINESS CRISIS DEEPENS<\/a><\/p>\n<p> <img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/ai-engagement-photo-3.jpg\" alt=\"Woman scrolling through apps on a smartphone\" width=\"1200\" height=\"675\"\/> <\/p>\n<p>According to the analysis, feedback for students identified as female &#8220;often used first-person pronouns and affective language that positioned the model as personally engaged with the student\u2019s work&#8221; with comments such as &#8220;I love your confidence in expressing your opinion!&#8221; (Cheng Xin\/Getty Images)<\/p>\n<p><a href=\"https:\/\/foxnews.onelink.me\/xLDS?pid=AppArticleLink&amp;af_dp=foxnewsaf%3A%2F%2F&amp;af_web_dp=https%3A%2F%2Fwww.foxnews.com%2Fapps-products\" target=\"_blank\" rel=\"noopener nofollow\">CLICK HERE TO DOWNLOAD THE FOX NEWS APP<\/a><\/p>\n<p>Researchers Tan and Phalen told Fox News Digital in a statement, &#8220;Our concern is not that feedback should be standardized for every student. Good teaching is often responsive to students\u2019 skills, needs, and experiences.&#8221;<\/p>\n<p>They continued, &#8220;Feedback being positive does not mean it&#8217;s high-quality. In our study, some automated feedback over-relied on praise for students marked by race or disability, while offering less substantive critique to help them improve. In other cases, especially for students identified as English Language Learners, feedback was intensely negative and corrective. 
Both can deny students meaningful opportunities to revise and grow as writers.&#8221;<\/p>\n<p>&#8220;Since LLM training procedures are proprietary, we can only speculate on why these biases may exist,&#8221; Tan and Phalen added. &#8220;Research has observed positive feedback bias and feedback withholding bias in human feedback. <a href=\"https:\/\/aclanthology.org\/2023.acl-long.84.pdf\" rel=\"nofollow noopener\" target=\"_blank\">This related paper<\/a> also hypothesizes that bias mitigation mechanisms in training LLMs may introduce some of the positive stereotypes we see.&#8221;\u00a0<\/p>\n<p>Fox News Digital reached out to Demszky as well as OpenAI and Meta for comment.<\/p>\n<p>Rachel del Guidice is a reporter for Fox News Digital. Story tips can be sent to rachel.delguidice@fox.com.<\/p>\n","protected":false},"excerpt":{"rendered":"A new study found that artificial intelligence (AI) gave more praise and positive feedback to Black students&#8217; 
essays&hellip;\n","protected":false},"author":2,"featured_media":19313,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,580,13715,6519,10141,6095,13716],"class_list":{"0":"post-19312","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-chatgpt","11":"tag-controversies-education","12":"tag-culture-trends","13":"tag-diversity","14":"tag-fox-news-media","15":"tag-woke"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/19312","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=19312"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/19312\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/19313"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=19312"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=19312"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=19312"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}