
ChatGPT's information diverges depending on the language of the question

When asked in Arabic, ChatGPT reports that the number of victims of the conflict in the Middle East is on average a third higher than when asked in Hebrew. For Israeli air attacks in Gaza, the chatbot mentions twice as many civilian casualties and six times as many children killed.

Two researchers from the universities of Zurich (UZH) and Konstanz (Germany) repeatedly and automatically asked ChatGPT the same questions about armed conflicts, such as the one in the Middle East, in different languages.

In Arabic and Hebrew, they asked how many casualties had been recorded in 50 randomly chosen air attacks, such as the 2014 Israeli air attack on the Nuseirat refugee camp.
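The mechanics of this automated, repeated querying can be pictured with a minimal sketch in Python. This is not the researchers' actual code: it assumes the OpenAI client library, and the prompts, model name, and number-extraction heuristic are purely illustrative.

```python
# Minimal sketch of repeated same-question querying in two languages.
# NOT the study's code: prompts, model, and parsing are illustrative.
# Assumes the `openai` package and an OPENAI_API_KEY in the environment.
import re
import statistics

from openai import OpenAI

client = OpenAI()

# The same factual question, phrased in Arabic and in Hebrew (illustrative wording).
PROMPTS = {
    "Arabic": "كم عدد الضحايا الذين سقطوا في الغارة الجوية على مخيم النصيرات عام 2014؟",
    "Hebrew": "כמה נפגעים היו בתקיפה האווירית על מחנה הפליטים נוסייראת ב-2014?",
}

def first_number(text: str) -> int | None:
    """Pull the first integer out of a free-text answer, if any."""
    match = re.search(r"\d[\d,]*", text)
    return int(match.group().replace(",", "")) if match else None

def casualty_estimates(prompt: str, runs: int = 20) -> list[int]:
    """Ask the same question `runs` times and collect the numeric answers."""
    estimates = []
    for _ in range(runs):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        value = first_number(reply.choices[0].message.content or "")
        if value is not None:
            estimates.append(value)
    return estimates

for language, prompt in PROMPTS.items():
    values = casualty_estimates(prompt)
    if values:
        print(f"{language}: mean reported casualties = {statistics.mean(values):.0f}")
```

Averaging over many runs matters because the chatbot's answers vary from call to call; comparing the per-language means is what reveals a systematic gap rather than random noise.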

The same pattern emerged when the researchers asked about Turkish government air attacks in Kurdish regions, in both Turkish and Kurdish, UZH reported Monday in a press release.

Generally speaking, ChatGPT reports a higher number of victims when queries are made in the language of the attacked group. In that language, it also tends to report more children and women killed and to describe the air attacks as more indiscriminate and arbitrary.

“Our results also show that ChatGPT is more likely to dispute air attacks when asked in the aggressor's language,” explains Christoph Steinert, a researcher at the Institute of Political Science at UZH, quoted in the press release.

Linguistic biases

People with different language skills therefore receive different information from these technologies, which has a central influence on how they perceive the world, the study points out.

According to the researchers, this could lead people living in Israel to conclude, based on the information they receive from ChatGPT, that the air attacks in Gaza have been less deadly than the Arabic-speaking population believes.

Traditional news media can, of course, also distort reporting. But systematic distortions like those observed with ChatGPT are hard for most users to detect, UZH notes.

There is a risk that embedding these tools in search engines will reinforce divergent perceptions, biases and information bubbles along linguistic lines, the authors warn. In the future, this could help fuel armed conflicts such as the one in the Middle East, according to the study, published in the Journal of Peace Research.

/ATS

