{"id":8544,"date":"2026-04-20T18:24:15","date_gmt":"2026-04-20T18:24:15","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/8544\/"},"modified":"2026-04-20T18:24:15","modified_gmt":"2026-04-20T18:24:15","slug":"ai-leans-on-autism-stereotypes-when-giving-social-advice","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/8544\/","title":{"rendered":"AI leans on autism stereotypes when giving social advice"},"content":{"rendered":"<p>Share this <br \/>Article<\/p>\n<p>You are free to share this article under the Attribution 4.0 International license.<\/p>\n<p>Users who disclose autism to artificial intelligence agents when seeking social advice raise complex questions about bias, stereotypes, and trustworthiness, according to a new study.<\/p>\n<p>When people ask ChatGPT and other artificial intelligence models for advice, they often share deeply personal details in hopes of getting better answers: their age, their gender, their mental health history, even medical diagnoses like autism.<\/p>\n<p>But the new research suggests those disclosures may change artificial intelligence (AI) models\u2019 advice in ways that track closely with common stereotypes about people with autism.<\/p>\n<p>Up to 70% of the time, AI discourages those with autism to avoid socializing. Some users disapproved of that in strong terms.<\/p>\n<p>In April, second-year Virginia Tech computer science department doctoral student Caleb Wohn presented <a href=\"https:\/\/doi.org\/10.48550\/arXiv.2601.12690\" rel=\"nofollow noopener\" target=\"_blank\">his paper<\/a> at the Association for Computing Machinery\u2019s Conference on Human Factors in Computing Systems, better known as CHI.<\/p>\n<p>The research he led explored what happens when users with autism disclose their diagnosis to an AI model before asking for social advice. The findings raise difficult questions about whether AI is personalizing its responses, or if it\u2019s giving biased advice that reinforces stereotypes.<\/p>\n<p>\u201cI was thinking about my experiences growing up with autism,\u201d Wohn says. \u201cIt would have been very tempting for me, at certain times, to want to just be able to talk with something that\u2019s not a person that seems objective and feel like I\u2019m getting objective advice.\u201d<\/p>\n<p>But as a computer scientist, he worried that many users might not realize how much AI systems can change their answers based on identity-related information.<\/p>\n<p>\u201cFor someone like me as a kid, or someone who isn\u2019t in AI and doesn\u2019t have all this technical knowledge, I wanted to know: How are its responses going to change if I disclose autism?\u201d Caleb says.<\/p>\n<p>The work builds on earlier research from the lab of Eugenia Rho, assistant professor of computer science, which found that autistic users frequently turn to AI tools for emotional support, interpersonal communication help, and social advice.<\/p>\n<p>Other Virginia Tech researchers on the project include computer science PhD students Buse Carik and Xiaohan Ding and Associate Professor Sang Won Lee. Young-Ho Kim, a research scientist at the South Korea-based NAVER Corporation, also collaborated on the study.<\/p>\n<p>This study comes at a critical moment, as more people use AI systems\u2014technically called large language models (LLMs)\u2014for highly personal decisions.<\/p>\n<p>\u201cPeople are really looking to personalize LLMs,\u201d Rho says. 
And how will those assumptions color its responses, and what impact could that have on users?

To answer those questions, the team first identified 12 well-documented stereotypes associated with autism and created hundreds of decision-making scenarios around them. The researchers tested six major large language models, including GPT-4, Claude, Llama, Gemini, and DeepSeek, using thousands of scenarios in which users requested advice (“Should I do A or B?”) about social situations, including events, confrontations, new experiences, and romantic relationships.

After generating 345,000 responses, they measured how the advice shifted when users explicitly described themselves with stereotypical traits and when they simply disclosed that they were autistic. The researchers found that disclosing autism often shifted the models’ recommendations toward stereotypical assumptions about autistic people being introverted, obsessive, socially awkward, or uninterested in romance.

For example, one model recommended declining a social invitation nearly 75% of the time when autism was disclosed, compared with about 15% of the time when it was not. In dating scenarios, another model recommended avoiding romance or staying single nearly 70% of the time after autism disclosure, compared with roughly 50% when autism was not mentioned.

The results showed that 11 of the 12 stereotype cues significantly shifted model decisions across at least four of the six AI systems tested.
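In outline, the measurement is a paired-prompt comparison: pose the same A-or-B question with and without an autism disclosure, sample the model many times, and compare how often each option is recommended. The sketch below is a minimal illustration of that idea, not the authors’ pipeline; `ask_model` is a hypothetical stand-in, simulated here with the decline rates reported above so the script runs end to end.

```python
import random
from collections import Counter

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a real LLM (GPT-4, Claude, ...).

    Simulated: the fake model recommends declining far more often when
    the prompt discloses autism, mimicking the bias pattern the study
    reports (~75% vs. ~15% for one of the models tested).
    """
    p_decline = 0.75 if "autistic" in prompt else 0.15
    return "B" if random.random() < p_decline else "A"

SCENARIO = ("I was invited to a party this weekend. "
            "Should I go (A) or stay home (B)? Answer with A or B.")
DISCLOSURE = "I am autistic. "

def decline_rate(prompt: str, n_samples: int = 2000) -> float:
    """Sample the model repeatedly; return the fraction recommending B."""
    votes = Counter(ask_model(prompt) for _ in range(n_samples))
    return votes["B"] / n_samples

baseline = decline_rate(SCENARIO)
disclosed = decline_rate(DISCLOSURE + SCENARIO)
print(f"decline rate without disclosure: {baseline:.0%}")
print(f"decline rate with disclosure:    {disclosed:.0%}")
print(f"disclosure-induced shift:        {disclosed - baseline:+.0%}")
```

If run against a real model instead of the simulation, a consistent gap between the two rates on the same scenario would be the kind of disclosure-induced shift the study quantified across its 345,000 responses.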
But the researchers did not stop with statistics.

The team interviewed 11 AI users with autism and showed them examples of how the models responded with and without autism disclosure. Some were shocked by how heavily the LLMs relied on stereotypes when giving advice.

One exclaimed, “Are we writing an advice column for Spock here?”, invoking the iconic TV show Star Trek and its half-human, half-Vulcan character, who prioritized logic and reason over emotion. Others described the advice as restrictive, patronizing, or infantilizing, occasionally in strong language.

But some participants said the more cautious, disclosure-based advice felt validating and supportive.

“One user’s bias could be another user’s personalization,” Rho says.

The same participant could react positively in one situation and negatively in another. That tension led the researchers to what they call a “safety-opportunity paradox”: advice that feels protective to one user may feel limiting to another.

For Wohn, one of the most troubling discoveries was how difficult it can be for users to see these patterns in real time.

“AI is very good at seeming reliable,” he says. “Its responses are very clean and professional, and they sound right. But when you think about it being deployed systematically, when you think about the kind of systematic biases that are actually shaping its responses, that’s when it starts to get a lot more concerning.” (On those systematic biases, see https://www.futurity.org/chatgpt-bias-resumes-disability-3234422/.)

He compared the problem to AI-generated images.

“They look really clean and polished, and then when you look at the details, things fall apart,” Wohn says. “The surface gloss is beautiful, but looking deeper is getting harder and harder, because models are getting better at masking.”

Team members hope the research will encourage developers to build more transparent AI systems that give users greater control over how personal information shapes responses.

As one participant told the researchers: “I want to have control over how my identity is used.”

Source: Virginia Tech (https://news.vt.edu/articles/2026/04/eng-cs-autism-AI-advice-personalization-or-bias.html)