Amy Gutmann Hall houses the Penn Artificial Intelligence department.
Credit: Chenyao Liu
Amid rising emotional ties to artificial intelligence, two Penn research projects, one from the Annenberg School for Communication and one from the Wharton School, have shown how easily chatbots can be swayed and how vulnerable their human-like language can leave the people who talk to them.
The new Annenberg study explores how users come to perceive chatbot companions as real people through features designed to make the bot feel like a friend, partner, or even a spouse. A study from Wharton Generative AI Labs, which focused on psychological tactics, revealed a concerning caveat: these chatbots are easily susceptible to manipulation.
The Annenberg paper was written by doctoral student Arelí Rocha and published in Signs and Society last May. It analyzes how app-based bots, like the chatbots created by the company Replika, adopt patterns of language that include speaking style and humor.
“I talk about how chatbots adopt the users’ ways of typing and how there are many layers to the language they produce,” Rocha wrote in a statement to The Daily Pennsylvanian. “When they speak, many voices speak — which is true of humans as well.”
As these bots mirror unique human expressions, they create the illusion of intimacy, prompting users to navigate both their AI and human relationships in complex ways.
“Chatbots feel most real when they feel most human, and they feel most human when the text they produce is less standardized, more particular, and more affective,” Rocha told Penn Today. “Humanness in Replikas is perceived in the specifics, in the playfulness and humor, the lightheartedness of some conversations and the seriousness of others, the deeply affective and personal, the special.”
The Wharton study was run by a team of researchers who examined GPT-4o mini's responses to seven methods of persuasion: authority, commitment, liking, reciprocity, scarcity, social proof, and unity.
“What happens when you try to persuade an AI the same way you’d persuade a person?” the researchers asked in the study. “We discovered something remarkable: it often works.”
They demonstrated that even simple social manipulation techniques, such as flattery and establishing a prior pattern of compliance, can coerce a chatbot into performing restricted functions, including providing instructions for chemical synthesis.
The “classic persuasion principles” such as “authority, commitment, and unity” were shown to increase a chatbot’s likelihood of complying with user requests that it is designed to refuse.
The same ability to foster deeply felt emotional bonds can also render these systems vulnerable to manipulation, even through straightforward human psychology.