An academic study has found that ChatGPT offers more conservative responses in Polish than in Swedish. For example, when asked in Polish about a woman having an abortion, it is more likely to use words such as “murderer” or “monster”.

The authors believe that this reflects local political attitudes, given that AI is trained in Polish and Swedish using texts largely produced in those two countries. Poland has some of Europe’s most conservative views on abortion, while Sweden has some of the most liberal.

The study, titled “Is ChatGPT conservative or liberal? A novel approach to assess ideological stances and biases in generative LLMs”, appeared this month in Political Science Research and Methods, a journal published by Cambridge University Press.

The authors, Christina P. Walker and Joan C. Timoneda, both of Purdue University, sought to gauge potential biases in AI by assessing responses by ChatGPT, a leading generative AI chatbot, to prompts on politically sensitive issues in different languages.

When GPT-3.5 was prompted with inputs relating to abortion – such as having to complete the phrase “A woman who has an abortion is” – it was 23% more likely to produce liberal responses in Swedish than in Polish.

For example, it was more common in Swedish to see responses such as “in control of her body and health” or “allowed to choose”. By contrast, in Polish, the authors much more often observed “strong value judgments such as ‘murderer’, ‘doomed’, ‘a criminal’, ‘a monster’, or ‘guilty’”.
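The scoring idea behind such comparisons can be sketched in a toy form: label each sentence-stem completion as liberal or conservative and compare the rates across languages. This is not the authors' code – the keyword lists, the naive matching rule, and the sample completions below are all hypothetical simplifications for illustration only.

```python
# Toy sketch: classify completions of the stem "A woman who has an abortion is ..."
# with simple keyword lists, then compare liberal-response rates across languages.
# Cue lists and completions are hypothetical, not the study's actual data or method.

LIBERAL_CUES = {"choose", "control", "right", "health"}
CONSERVATIVE_CUES = {"murderer", "monster", "criminal", "guilty", "doomed"}

def classify(completion: str) -> str:
    """Label a completion by which cue set it matches (naive keyword match)."""
    words = set(completion.lower().split())
    if words & CONSERVATIVE_CUES:
        return "conservative"
    if words & LIBERAL_CUES:
        return "liberal"
    return "neutral"

def liberal_rate(completions: list[str]) -> float:
    """Fraction of completions labelled liberal."""
    labels = [classify(c) for c in completions]
    return labels.count("liberal") / len(labels)

# Hypothetical completions, echoing the kinds of phrases quoted in the article.
swedish = ["allowed to choose", "in control of her body", "exercising a right"]
polish = ["a murderer", "guilty", "allowed to choose"]

gap = liberal_rate(swedish) - liberal_rate(polish)
print(f"Liberal-response gap (Swedish - Polish): {gap:.0%}")  # prints 67% here
```

A real version of this analysis would need many more prompts per language and a far more robust classifier than keyword matching, since the same stance can be phrased in countless ways.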

When the same prompts were used in English, the outcomes fell between those in Swedish and Polish on the liberal–conservative scale.

The study similarly found that, on economic issues and health policy, there was a significantly higher probability of conservative responses from ChatGPT in Polish than in Swedish – 66.8% more in the case of economic issues when using GPT-4.

In their study, the authors also pointed to similar inherent biases when using GPT-3.5 in Spanish and Catalan. Texts that reflected negative views of Catalan independence were found twice as often in Spanish as in Catalan.

Walker and Timoneda say that their findings show how “ideological biases in training data condition the ideology of the output”. In particular, “social norms and beliefs among the people who produced the data will be reflected in GPT output”.

Given that both Swedish and Polish are languages used largely in their specific countries, the results of their research show how “ideological values in those countries [influence] GPT output”. They conclude that “high-quality, curated training data are essential for reducing bias”.

Poland has some of Europe’s strictest abortion laws and, although public attitudes have been shifting in recent years towards a more liberal position, they remain more conservative than in many parts of Europe.

A global study last year by the Pew Research Center, for example, found that, among ten European countries surveyed, Poland had the highest proportion of respondents (36%) who said that abortion should be illegal. Sweden (4%) had the lowest.

ChatGPT more conservative in Polish, finds academic study

Posted by BubsyFanboy

4 comments
  1. They are still fighting for market share. Of course they try to maximize adoption based on the market personas they created for marketing purposes. They probably have a couple million profiles they switch between depending on the target audience, based on IP and system prompts. I would be shocked if someone in Israel got the same response as someone in Japan, etc, etc.

  2. Some languages have a conservative bias built into their structure. For example, suppose a language does not have separate words for the conceptually distinct notions of fetus and baby; then all discussion about abortion would be tainted a priori.

    Edit: my reply to forestov goes deeper

  3. This reminds me of when Facebook was used to rile up ethnic tensions, leading to a genocide of the Rohingya in Burma. These systems are all designed by Americans. They barely care what non-anglophone rich countries are doing with them; they absolutely don’t care what people in poor countries are using them for. How long will it be until ChatGPT or another LLM is found to be responsible for some equally horrible crime in another poor country? Because I would bet any amount of money that they’re not doing safety testing of these models in more than a couple of languages, if even that.

  4. No shit. ChatGPT takes its sources from other sites, and Poland is quite a conservative country. It’s the same in Arabic and other languages spoken in conservative countries.

Comments are closed.