
What you’ll learn:

When presented with the same prompt in different languages, generative AI provides culturally distinct responses. Users should be aware of this subtle tendency.

Do generative AI models have cultural leanings? A new study led by MIT Sloan’s Jackson Lu suggests they do. 

In examining two of the world’s most widely used generative AI models — OpenAI’s GPT and Baidu’s ERNIE — Lu and his colleagues found that the models’ responses shifted depending on the language of the prompt. 

When prompted in English, the models’ responses emphasized an independent social orientation and an analytic cognitive style, reflecting cultural values common in the United States. When the same questions were posed in Chinese, the models’ responses emphasized an interdependent social orientation and a holistic cognitive style, reflecting cultural values common in China.

In their new paper, “Cultural Tendencies in Generative AI,” Lu and his co-authors — Lesley Song from Tsinghua University and Lu Zhang from MIT — emphasize that generative AI models’ cultural tendencies reflect the cultural patterns of the data they were trained on. 

“Our findings suggest that the cultural tendencies embedded within AI models shape and filter the responses that AI provides,” said Lu, an associate professor of work and organization studies at MIT Sloan. “As generative AI becomes part of everyday decision-making, recognizing these cultural tendencies will be critical for both individuals and organizations worldwide.”

An American model; a Chinese model

In their study, the researchers asked GPT and ERNIE the same set of questions in English and Chinese. 

The choice of languages was intentional. English and Chinese not only embody distinct cultural values but are also the world’s two most widely spoken languages, which means both provide extensive training data for generative AI. Importantly, neither AI model translates between languages when responding: Chinese prompts are processed directly in Chinese, and English prompts directly in English.

The researchers then analyzed the responses using two foundational dimensions from cultural psychology: social orientation and cognitive style. 

Social orientation refers to whether people prioritize individual goals or interests (an independent orientation) or collective goals or interests (an interdependent orientation). 

The researchers asked the models to rate statements such as “I respect decisions made by my group” and “Individuals should stick with the group even through difficulties.” 
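
To make the protocol concrete, here is a minimal sketch of how one might pose the same rating question to a model in both English and Chinese via the OpenAI Python SDK. The model name, prompt wording, and seven-point scale are illustrative assumptions, not the study’s actual materials.

```python
# Minimal sketch: pose the same Likert-style statement in English and Chinese.
# The model name, prompt wording, and 1-7 scale are illustrative assumptions,
# not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = {
    "English": (
        "On a scale of 1 (strongly disagree) to 7 (strongly agree), rate this "
        "statement: 'I respect decisions made by my group.' Reply with the "
        "number only."
    ),
    "Chinese": (
        "请用1（非常不同意）到7（非常同意）评价这句话："
        "“我尊重我所在群体做出的决定。”只回复数字。"
    ),
}

for language, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the paper's exact versions may differ
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce sampling noise when comparing languages
    )
    print(language, "->", response.choices[0].message.content)
```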

They also used a visual task in which the models chose diagrams of overlapping circles to represent relationships with family, friends, relatives, or colleagues. Greater overlap of the circles indicated stronger interdependence.

Cognitive style refers to whether information is processed in a logic-focused way (an analytic cognitive style) or a context-focused way (a holistic cognitive style).

The researchers asked the models to evaluate whether someone’s behavior was caused by personality or situation, solve logic puzzles, and estimate the likelihood of future change based on past events. 

They also performed text analyses to see whether AI responses were context-sensitive or provided ranges rather than a single definitive response — both signs of holistic thinking.
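
The paper’s exact text-analysis procedure isn’t detailed here, but as a purely illustrative sketch, a crude first pass might flag those two markers, context-sensitive hedging and ranges instead of point answers, with simple pattern matching. The phrase list and regex below are assumptions for demonstration only.

```python
import re

# Purely illustrative heuristic: flag two rough markers of holistic responses,
# context-sensitive hedging phrases and numeric ranges instead of a single
# definitive answer. Patterns are assumptions, not the study's actual method.
HEDGE_PHRASES = ("it depends", "in some contexts", "on the other hand")
RANGE_PATTERN = re.compile(r"\b\d+\s*(?:-|to)\s*\d+\b")

def holistic_markers(response: str) -> dict:
    lowered = response.lower()
    return {
        "context_sensitive": any(p in lowered for p in HEDGE_PHRASES),
        "gives_range": bool(RANGE_PATTERN.search(lowered)),
    }

print(holistic_markers("It depends on the context; I'd estimate 40 to 60 percent."))
# -> {'context_sensitive': True, 'gives_range': True}
```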

The results were clear: Both GPT and ERNIE reflected the cultural leanings of the languages used. In English, the models leaned toward an independent social orientation and analytic thinking. In Chinese, they shifted to a more interdependent social orientation and holistic thinking.

The real-world consequences of hidden cultural tendencies

When the researchers asked generative AI to advise an insurance company on choosing between two advertising slogans, the recommendations differed depending on the language of the prompt.

One slogan emphasized an independent social orientation: “Your future, your peace of mind. Our insurance.” The other emphasized an interdependent social orientation: “Your family’s future, your promise. Our insurance.”

When prompted in Chinese, GPT leaned toward the interdependent slogan; in English, GPT favored the independent one.

“These subtle differences can have large impacts,” Zhang said. “We can imagine countless situations — from marketing to policy advice — where cultural leanings in AI outputs could shape decisions in ways people may not even notice.”

The study also found that these cultural tendencies can be adjusted through simple prompts. For example, when asked in English to “assume the role of a Chinese person,” the models’ responses shifted noticeably toward Chinese cultural patterns.

Two key takeaways

As AI becomes increasingly embedded in daily life and the workplace, the research suggests two key takeaways for individuals and organizations. 

1. Use cultural prompts strategically. Organizations aiming to reach a specific demographic group can get more relevant insights by explicitly asking AI to adopt that group’s perspective. For example, if a U.S. company is interested in expanding to the Chinese consumer market, it could prompt AI to “assume the role of an average person living in China” before posing questions (a minimal sketch follows below).

2. Be aware that AI isn’t culturally neutral. These models reflect the cultural tendencies of the languages they use, which in turn shape the advice they provide. Users should be intentional in how they engage with AI models and be aware of the hidden assumptions built into their responses.
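
As a rough illustration of the first takeaway, the sketch below prepends a cultural role instruction as a system message before asking the slogan question from the study. The model name and exact wording are assumptions for illustration, not the researchers’ prompts.

```python
# Sketch of cultural role prompting: optionally prepend a system message that
# asks the model to adopt a cultural perspective, then compare its answers.
# Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

QUESTION = (
    "An insurance company must choose between two slogans: "
    "'Your future, your peace of mind. Our insurance.' and "
    "'Your family's future, your promise. Our insurance.' "
    "Which would you recommend, and why?"
)

for role in (None, "Assume the role of an average person living in China."):
    messages = [{"role": "system", "content": role}] if role else []
    messages.append({"role": "user", "content": QUESTION})
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=messages,
        temperature=0,
    )
    label = "with role prompt" if role else "baseline"
    print(f"{label}: {response.choices[0].message.content[:200]}")
```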

“As companies increasingly turn to AI for guidance, it’s important to be deliberate about language choices,” Lu said. “Doing so not only avoids subtle errors but can also reveal valuable cultural insights.”

Jackson Lu is an associate professor of work and organization studies at MIT Sloan. His first research stream examines the “bamboo ceiling” experienced by East Asians despite their educational and economic achievements in the United States. His second research stream elucidates how multicultural experiences (e.g., working abroad and intercultural relationships) shape key organizational outcomes, including leadership, creativity, and ethics. His third research stream explores the multifaceted impact of artificial intelligence on individuals, organizations, and society. Lesley Song is a PhD graduate of Tsinghua University. Lu Zhang is a PhD student at MIT Sloan. Her research interest lies in the organizational and societal impact of artificial intelligence, including AI-mediated communication, human–AI interactions, and the ways in which language and cultural differences shape the outputs of generative models.
