Facebook severely limited the visibility of Palestinian media following the attacks of October 7, 2023, according to a BBC investigation published Wednesday, December 18. Instagram, another Meta platform, also tightened its moderation of comments from Palestinian accounts, the company admitted, in the name of combating hate speech.
BBC News Arabic journalists compiled data on the visibility of Palestinian, Israeli and Arabic-language media in the years preceding and following the October 7 attack. This "engagement" data, a metric reflecting an account's reach and visibility on social networks, included reactions, comments and shares. The figures compiled by the BBC show a very clear loss of engagement for Palestinian media since October 7, 2023, even as other outlets covering the conflict, both Israeli and non-Palestinian Arabic-language, saw theirs increase.
According to the BBC, the Facebook pages of twenty Palestinian media outlets operating from the Gaza Strip or the West Bank have lost 77% of their public engagement since October 7. Views of Palestine TV's Facebook account reportedly fell by 60%, despite its 5.8 million followers. By contrast, the twenty Israeli media outlets analyzed saw their engagement increase by 37%. A similar increase, driven by coverage of the conflict, appears on the pages of thirty of the Arabic-language outlets analyzed, such as Al-Jazeera and Sky News Arabia, whose engagement almost doubled.
Measures to “respond to a spike in hateful content”
This disparity in visibility cannot be explained solely by Facebook's difficulties in moderating Arabic-language content, since non-Palestinian Arabic-language outlets saw their engagement rise. The "Facebook Files", internal documents leaked in 2021, nonetheless attested to the understaffing of the platform's Arabic-speaking moderation teams and to malfunctions in the algorithms responsible for recognizing content criticizing terrorism: they were wrong more than three times out of four, leading to the removal of legitimate content.
In response to these findings, Meta noted that the decline was linked to changes in its moderation policy implemented in October 2023. These targeted in particular the removal of content linked to Hamas, which is classified as a terrorist organization by the United States and appears on Facebook's list of dangerous organizations and individuals. The alignment of this list with US foreign-policy decisions, which risks introducing moderation bias in conflicts abroad, has been criticized by former Facebook employees.
A Meta employee, speaking on condition of anonymity, told the BBC that Instagram's algorithm had also been changed in the week following October 7 in order to toughen the moderation of comments written by Palestinian accounts. Meta confirmed the change, saying it was necessary to respond to what it called a "spike in hateful content". The company added that the Instagram moderation policies changed at the start of the conflict had since returned to normal, without specifying whether the same was true of the changes to Facebook's algorithm; asked about this, Meta had not responded to Le Monde by the time this article was published.
This is not the first time Meta's moderation policy has been accused of bias on the Middle East conflict. In December 2023, the organization Human Rights Watch (HRW) denounced the "systemic" censorship of pro-Palestinian posts on Meta's platforms. The debate also predates October 7: HRW had already accused Meta of censorship in 2021, after which Meta committed to changing its moderation policies, a promise that, according to the organization, has not been kept.