Marina vd Roest hadn’t faced the man who abused her in decades when she first sat down in front of the laptop. Confronted with his realistic, blinking, speaking face, she felt “scared … like a little child again.”
“Sometimes I had to close the laptop and get my breath back before opening it and continuing with the conversation,” she says. Vd Roest is one of the first people to have tried out a radical new form of therapy that involves putting survivors face-to-face with A.I.-generated deepfakes of their attackers as a means of healing unresolved trauma.
Many people now count chatbots among their friends, therapists, and lovers, while griefbots mimic deceased loved ones. The technology can be dangerous; chatbots have been tied to some psychotic episodes and suicides. Deepfake therapy of the kind vd Roest tried is closely supervised, and the avatar is voiced by a trained clinician. The same approach could prove hugely risky if attempted solo. For vd Roest, it was a revelation.
Vd Roest had suffered from decades of post-traumatic stress disorder following her abuse. She had tried traditional therapy, as well as interventions like eye movement desensitization and reprocessing, in which a patient recalls traumatic events while experiencing auditory, visual, or tactile stimuli. Those treatments helped only temporarily; her PTSD returned, prompting her to try the experimental approach.
She was part of a two-person pilot study of the new therapeutic approach published in 2022. After both participants responded positively to the intervention, a larger study got underway in the Netherlands; its results are due to be published next year.
“I was a child when it was happening and now I’m older, and I thought maybe it would be good for me, to help me process, to talk to the man who did it,” she says. Vd Roest was well aware that the man’s virtually generated image wasn’t real, but that didn’t stop a fight-or-flight response from kicking in upon seeing his face. Afterward, adrenaline rushed through her.
Given the chance to finally express herself, she questioned the deepfake about why he had attacked her. “Do you know how old I was? Why did you do it? Was I the only one? Did anybody else know about it?” she says she asked. “Everything that was in my mind, I could throw out.”
She became angry with the avatar. “For the first time in all those years I could express my feelings, my anger, my pain,” she says. “I think the therapist [who voiced the deepfake] was a little afraid of me afterwards,” she adds with a laugh.
The therapy involves asking survivors to bring photos of their attackers, which are used to build a realistic deepfake that is operated by a therapist during a live session. The image is responsive to the therapist’s movements—when they blink or open their mouth, so does the A.I. avatar.
The setup places the patient in one room with a therapist on hand to coach them through the conversation, while a second therapist in a different room operates the technology. The patient can say whatever they want over the course of a session lasting up to 90 minutes.
The idea for the treatment was inspired by a preexisting form of exposure therapy in which survivors of sexual abuse are asked to bring in a photo of their attacker to speak to, as well as the restorative justice model that sees victims of crimes meet with those who wronged them. Some of the research on these interventions shows the interactions can “promote psychological well-being, increase a sense of justice and empowerment, and decrease anger, anxiety and guilt, as well as the fear of revictimization and a desire for revenge,” says Jackie June ter Heide, the clinical psychologist who is leading the current study.
“It gives the victim the sense of being heard,” says ter Heide, who is directing the project in her capacity as senior researcher at the Netherlands’ ARQ National Psychotrauma Center. “Even if the perpetrator is not able to be very empathetic, at least it gives them the sense that ‘I have spoken up for myself. I have done justice to myself.’ ” For survivors of abuse, who often carry around guilt and shame for years afterward, this therapy can offer the chance to finally externalize these feelings onto the rightful owner: the perpetrator.
Patients undergo thorough preparatory interviews to help them set expectations for the confrontations. “We ask them things like: What is your goal with the deepfake session? Can you give us an idea of the perpetrator? What are they like? What would you like to say and how do you think they might respond?”
Different perpetrators might dictate different approaches. For example, some patients were groomed over time by someone who tried to appear kindly and gain their trust, while others might have been violently attacked by a stranger. To help the encounter feel as realistic as possible while still promoting healing, the therapist can adopt a slightly different persona in each case. For a perpetrator who has groomed someone, they could say something like, “I was lonely. I felt like I wasn’t worth anything.” For a more violent, outwardly callous perpetrator, they could say something like, “I had a feeling of power. I was angry with everyone,” says ter Heide.
The most important job for the clinician is to make sure that the shame and guilt remain firmly on the perpetrator. “We have example sentences that can help the deepfake therapist stress that,” says ter Heide. “Sentences like: This shouldn’t ever have happened. I never realized what I did to you. It wasn’t your fault. You were only a child. I made this choice. It’s my responsibility. I am the one who should feel guilty.”
Of course, in many cases it’s unlikely that the real perpetrator would accept blame and apologize. The therapist must tread a fine line between posing as a somewhat believable perpetrator and providing the patient with the opportunity for healing.
The sessions are intended to be limited, and patients have differed in how many they want. While one patient was satisfied after a single deepfake encounter, another asked for four, because in early sessions she had had “difficulty looking at the screen” and expressing what she wanted to say.
In the first instance, she struggled to confront her attacker due to fears that this would make her “nasty” like him. But ter Heide stressed that “you don’t need to, as a patient, swear at the perpetrator, or tell him he’s horrible,” but it helps to “acknowledge the truth … hold somebody accountable and stand up for yourself.”
Asked whether this therapy could re-traumatize some patients, ter Heide said it was “theoretically possible,” but that this is exactly what the preparatory sessions are designed to prevent. Patients who don’t appear to be appropriate candidates aren’t offered the intervention.
The risks would be far greater if the interactions were taking place outside of a clinical setting, or for “re-animation” technologies like griefbots, which offer a simulacrum of speaking to a deceased loved one. “It’s really about having good therapists in control,” says Marieke Bak, assistant professor in medical ethics at Amsterdam UMC. She is working with a group of ethicists and lawyers who are using the study to inform guidelines that could shape this kind of therapy.
Bak has considered potential risks, such as patients’ sense of reality blurring or their becoming over-attached to the deepfakes. Importantly, in the current study, “No one is going to take home the deepfakes,” she says. However, deepfake software is publicly available, so someone could attempt an intervention like this on their own. In that case, the risks of re-traumatization or other harmful mental health effects would be much higher.
The group has also looked at the privacy implications for the perpetrator, whose photographs are used to design the deepfake. “In this case, it’s really undesirable to go to the perpetrator to ask for consent,” Bak points out.
Ultimately, it comes down to a balance of interests. “If someone has a complicated case of PTSD, and this is thought to be the thing that will work … we say that the legitimate interest of the patients would generally outweigh the privacy risks of the perpetrator,” says Bak.
Her team has proposed safeguards like watermarking any manipulated media to clearly mark it as fake, in case the materials were exposed in a data breach. Because the European Union has some of the strictest data-privacy laws in the world, the therapy might face fewer regulatory hurdles elsewhere.
The results of ter Heide’s ongoing study won’t be published until next year. One week after their sessions, vd Roest and the other pilot participant showed reduced PTSD symptoms, less self-blame, and more self-forgiveness. The current study will need to wrap up before researchers can fully quantify the impact. But vd Roest says that a few years on from her intervention, a strong sense of relief has persisted. “I think I needed it,” she says.