{"id":1146,"date":"2026-04-09T00:25:15","date_gmt":"2026-04-09T00:25:15","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/1146\/"},"modified":"2026-04-09T00:25:15","modified_gmt":"2026-04-09T00:25:15","slug":"new-study-shows-explainability-is-a-must-for-older-adults-to-trust-ai","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/1146\/","title":{"rendered":"New Study Shows Explainability is a Must for Older Adults to Trust AI"},"content":{"rendered":"<p>Newswise \u2014 Voice-activated, conversational artificial intelligence (AI) agents must provide clear explanations for their suggestions, or older adults aren\u2019t likely to trust them.<\/p>\n<p>That\u2019s one of the main findings from a study by AI Caring on what older adults expect from explainable AI (XAI).<\/p>\n<p><a href=\"https:\/\/ai-caring.org\/\" rel=\"nofollow noopener\" target=\"_blank\">AI Caring<\/a> is one of three AI Institutions led by Georgia Tech and funded by the National Science Foundation (NSF). The institution supports AI research that benefits older adults and their caregivers.<\/p>\n<p>Niharika Mathur, a Ph.D. candidate in the School of Interactive Computing, was the lead author of a paper based on the study. The paper will be presented in April at the <a href=\"https:\/\/chi2026.acm.org\/\" rel=\"nofollow noopener\" target=\"_blank\">2026 ACM Conference on Human Factors in Computing Systems (CHI) in Barcelona<\/a>.<\/p>\n<p>Mathur worked with the <a href=\"https:\/\/empowerment.emory.edu\/\" rel=\"nofollow noopener\" target=\"_blank\">Cognitive Empowerment Program at Emory University<\/a> to interview 23 older adults who live alone and use voice-activated AI assistants like Amazon\u2019s Alexa and Google Home.<\/p>\n<p>Many of them told her they feel excluded from the design of these products.<\/p>\n<p>\u201cThe assumption is that all people want interactions the same way and across all kinds of situations, but that isn\u2019t true,\u201d Mathur said. 
\u201cHow older people use AI and what they want from it are different from what younger people prefer.\u201d<\/p>\n<p>One example she gave is that young people tend to be informal when talking with AI. Older people, on the other hand, talk to the agent like they would a person.<\/p>\n<p>\u201cIf older adults are talking to their family members about Alexa, they usually refer to Alexa as \u2018she\u2019 instead of \u2018it,\u2019\u201d Mathur said. \u201cThey tend to humanize these systems a lot more than young people.\u201d<\/p>\n<p>Good Explanations<\/p>\n<p>The study evaluated AI explanations that drew information from four sources of data:<\/p>\n<ul>\n<li>User history (past conversations with the agent)<\/li>\n<li>Environmental data (indoor temperature or the weather forecast)<\/li>\n<li>Activity data (how much time a user spends in different areas of the home)<\/li>\n<li>Internal reasoning (mathematical probabilities and likely outcomes)<\/li>\n<\/ul>\n<p>Mathur said older users trust the agent more when it bases its explanations on data from the first three sources. However, internal reasoning creates skepticism.<\/p>\n<p>The agent falls back on internal reasoning when it doesn\u2019t have enough data from the other three sources to explain a suggestion. Instead, it provides a percentage reflecting its confidence based on what it knows.<\/p>\n<p>\u201cThe overwhelming response was negative toward confidence scores,\u201d Mathur said. \u201cIf the AI says it\u2019s 92% confident, older adults want to know what that\u2019s based on.\u201d<\/p>\n<p>This is another example that Mathur said points to generational preferences.<\/p>\n<p>\u201cThere\u2019s a lot of explainable AI research that shows younger people like to see numbers in explanations, and they also tend to rely too much on explanations that contain numerical confidence. Older adults are the opposite. It makes them trust it less.\u201d<\/p>\n<p>Knowing the Context<\/p>\n<p>Mathur said that AI agents interacting with older adults should serve a dual purpose. 
They should provide users with companionship and support independence while reducing the caretaking burden often placed on family members.\u00a0<\/p>\n<p>Some studies have shown that engineers have tended to favor caretakers in the design of these tools. They prioritize daily tasks and routines, leaving some older adults to feel like they are merely a box to be checked.<\/p>\n<p>She discovered that in urgent situations, older users prefer the AI to be straightforward, while in casual settings, they desire more conversation.<\/p>\n<p>\u201cHow people interact with technological systems is grounded in what the stakes of the situation are,\u201d she said. \u201cIf it had anything to do with their immediate sense of safety, they did not want conversational elaboration. They want the AI to be very direct and factual.\u201d<\/p>\n<p>Not Just Checking Boxes<\/p>\n<p>\u201cThey\u2019re not being thought of as consumers,\u201d Mathur said. \u201cA lot of products are being made for them but not with them.\u201d<\/p>\n<p>She also said psychological well-being is one of the most important outcomes these tools should produce.\u00a0<\/p>\n<p>Showing older adults that they are listened to can significantly help in gaining their trust. 
Some interviewees told Mathur they want agents that are deliberate about understanding their preferences and don\u2019t dismiss their questions.<\/p>\n<p>Meeting these needs makes older adults less likely to resist the technology or come into conflict with family members.<\/p>\n<p>\u201cIt highlights just how important well-designed explanations are,\u201d she said. \u201cWe must go beyond a transparency checklist.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"Newswise \u2014 Voice-activated, conversational artificial intelligence (AI) agents must provide clear explanations for their suggestions, or older adults&hellip;\n","protected":false},"author":2,"featured_media":1147,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[24,25,1443,865,1442,373,1444,134],"class_list":{"0":"post-1146","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-ai","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificial-intelligence-aiolder-adultshuman-ai-interactionai-transparency","11":"tag-ethics-and-research-methods","12":"tag-georgia-institute-of-technology","13":"tag-newswise","14":"tag-stem-education","15":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/1146","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=1146"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/1146\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/1147"}],"
wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=1146"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=1146"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=1146"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}