(Agence Ecofin) – In the age of social networks, online content management pits freedom of expression against the control of information. Facing growing criticism of its methods, Meta is rethinking its policies, beginning a transition that could affect the credibility of its social platforms worldwide.
On Tuesday, January 7, 2025, Meta, the social media giant led by Mark Zuckerberg, announced that it would end its third-party fact-checking program in the United States. The program allowed media outlets and non-governmental organizations to select and verify information published on Facebook or Instagram; a verification notice appeared under misleading or false content reviewed by these partners, who were paid for the task.
In place of this solution, the company will introduce a community notes program, similar to the one adopted by X (formerly Twitter) under the leadership of Elon Musk. The program, which will begin in the United States, aims to encourage users to actively participate in evaluating content on Facebook, Instagram and Threads.
"What was intended to provide more information to users has turned into a tool of censorship. We have seen excessive moderation that has stifled legitimate political debate. Our new community program aims to give more power to users, while still addressing the potential risks of harmful content," explained Joel Kaplan, Meta's head of global affairs and a former Republican official, in a blog post.
This initiative coincides with Donald Trump's return to the presidency. It follows tensions during his previous term, when Trump criticized Meta for policies perceived as hostile to conservative voices, particularly after his temporary ban in 2021. During a press briefing held on Tuesday, Trump welcomed Meta's new content moderation policy. Asked whether this change of heart was linked to his earlier threats against Zuckerberg, whom he had warned could face life in prison if he acted illegally, the incoming president bluntly replied, "probably, yes."
The challenges for Africa
While this new policy only concerns the United States for now, it raises questions about the future of content moderation in other regions, notably Africa. Social networks play a central role in access to information, but they also constitute fertile ground for the propagation of false news, hate speech and even illicit content.
According to a study conducted in June 2024 by the American cybersecurity company KnowBe4, 84% of Africans use social networks as their main source of information, and 80% of them favor Facebook. This reliance on social platforms facilitates access to information, but it also contributes to the polarization of opinions and the spread of manipulated content, posing a danger to social cohesion and economic investment in fragile contexts.
A notable example dates back to 2018, when a fake Twitter account published a message attributed to Nigerian opposition leader Atiku Abubakar, then a presidential candidate, thanking the Association of Nigerian Gay Men for its support and promising to defend their cause if he were elected. The false information sparked strong indignation and weakened the politician's position, particularly in Nigeria's Muslim-majority north.
Real consequences on stability
In a note published in April 2024, the Africa Center for Strategic Studies indicated that disinformation campaigns aimed at manipulating African information systems have nearly quadrupled since 2022, taking advantage of the rapidly expanding reach and accessibility of digital communications. The center documented 189 disinformation campaigns in Africa, a figure it considers an underestimate given the opaque nature of disinformation.
The Africa Center for Strategic Studies argues that "the scale of disinformation is closely linked to instability. Disinformation campaigns have directly caused deadly violence, encouraged and validated military coups, silenced members of civil society, and served as a front for corruption and exploitation. This has had concrete consequences on the rights, freedoms and security of Africans."
Through its collaboration with various African public and private organizations, Meta is one of the companies at the forefront of the fight against disinformation on the continent. This is a significant asset in several countries where government action against fake news remains weak. In its 2022 study "The fight against disinformation in the French-speaking world: a comparative overview of public policies," the International Organization of the Francophonie (OIF) already identified the absence of legislative and regulatory texts dedicated to fighting disinformation as one of the causes of its proliferation. Although states criminalize the spread of false news in their communications laws, they still struggle to protect themselves effectively against it on newer media such as social networks.
The OIF maintained that “ this is particularly the case in the majority of sub-Saharan African countries, where only Mauritania has a law specifically dedicated to the fight against disinformation. Consequently, many countries do not have institutions specialized in the fight against disinformation. Regarding media regulatory bodies, many of them do not have the legal competence to regulate social networks, spaces par excellence for the propagation of disinformation in French-speaking sub-Saharan Africa. ».
Meta's new content moderation policy, which relies on users themselves to rate and report questionable content, remains controversial: doubts persist over the competence and credibility of the users who will provide the ratings. Although currently limited to the United States, Meta's decision highlights the urgency for African governments to develop policies adapted to the management of online content. On a continent where the regulation of social networks remains insufficient, such policies could become a key lever to counter disinformation and guarantee a healthier digital environment.
Samira Njoya