AI can simulate the dead — but should it?

AI systems can recreate the voices and personalities of deceased loved ones, offering comfort to some while raising complex questions about grief, identity, and ethical boundaries. Credit: Margaux Jacks

Artificial intelligence is moving into one of the most intimate areas of human life: grief. Tools that can simulate a deceased person’s voice, writing style, or conversational patterns are no longer science fiction. They are emerging products and technologies that promise comfort for some mourners while raising profound ethical, psychological, and cultural questions.

At the University of Virginia School of Data Science, Renée Cummings studies how data ethics intersects with experiences that are deeply human. She sees AI-driven grief technologies as a powerful example of how innovation can reshape social norms around memory, loss, and identity.

Recently, Cummings discussed the monetization of intimacy in artificial intelligence at the World Economic Forum in Davos, Switzerland, where global leaders examined how AI is reshaping the most personal dimensions of human life.

“We know that AI is being used to create intimate experiences,” Cummings said. “These systems can recreate patterns of communication from texts, emails, voice recordings, and social media. For some people, that feels like continuing a relationship rather than remembering one.”

Often called “griefbots” or digital memorial agents, these systems are trained on a person’s digital footprint. They allow users to ask questions, receive simulated responses, and interact with an approximation of someone who has died. For some families navigating loss, the appeal is immediate. A familiar tone or phrase can feel grounding during a destabilizing time.

Psychologists say grieving involves gradually learning to reconcile the emotional presence of a loved one with the reality of their absence. Interactive AI complicates that process. By introducing a responsive digital presence, these systems blur the boundary between memory and simulation.

Kimberly D. Acquaviva, the Betty Norman Norris Endowed Professor of Nursing at the University of Virginia, cautions that the long-term cultural impact remains uncertain.

“Because AI grief technologies like ‘You, Only Virtual’ are relatively new, it’s hard to predict the extent to which they might reshape social norms and grieving practices,” Acquaviva said.

“What’s not hard for me to imagine, though, is that these technologies will stimulate interest in legally binding advance directives regarding the posthumous use of our digital likeness or voice.”

She is unequivocal about her own wishes.

“Personally, I would be vehemently opposed to my loved ones creating and/or interacting with an AI simulation of me after I died,” she said. “An AI simulation of me wouldn’t be me. It would just be a commodified delusion dangled in front of my grieving loved ones by a corporation eager to turn a profit.”

Researchers have found that some users experience comfort and emotional validation, especially when they feel social pressure to move on more quickly than they are ready. At the same time, experts caution that immersive simulations could intensify denial or prolong distress for vulnerable individuals. Early studies emphasize that outcomes vary widely depending on timing, personality, and context.

Acquaviva underscores what may be at stake if technology substitutes for human connection. “A fundamental aspect of the human experience is loving someone deeply,” she said. “Inextricably intertwined with loving someone deeply, however, is grieving when that person dies.”

Reflecting on her own loss, Acquaviva said that interacting with an AI simulation of her late wife instead of grieving her death would have made it difficult to adjust to the reality of that loss.

“More importantly,” she said, “it would have impeded my ability to rebuild my life and my identity after her death. A big part of that rebuilding process involved leaning on my family and friends for support.”

Cummings frames the issue not as a simple question of benefit or harm, but as one of responsibility. The data required to build a digital replica often includes years of private communications and behavioral traces. That raises difficult questions about consent and ownership after death.

“Your data is your life,” she said. “If someone has access to that data, they can recreate interactions even after you’ve passed on. We must ask who owns that information, how it is governed, and what protections exist for families and individuals.”

Those concerns extend into the economics of grief. Companies offering AI memorial services operate in a rapidly expanding digital marketplace where emotional vulnerability intersects with monetized products. Scholars studying technology ethics warn that engagement-driven design could unintentionally encourage dependency rather than healing.

For both scholars, the question is not simply whether AI will shape how people mourn, but how intentionally society chooses to respond.

“The technology itself is not deciding how we mourn,” Cummings said. “People are. Our values, our ethics, and our sense of responsibility should guide how these tools are used.”

As AI continues to evolve, so too will conversations about what it means to remember, to honor, and to let go. Those conversations, researchers say, are as much about humanity as they are about technology.

Provided by
University of Virginia

Citation:
AI can simulate the dead—but should it? (2026, April 29)
retrieved 29 April 2026
from https://phys.org/news/2026-04-ai-simulate-dead.html
