Key information
- Meta replaces professional fact-checkers with user-generated community ratings to moderate content.
- The new policy aims to simplify content policies and remove restrictions on topics such as immigration and gender.
- Meta CEO Mark Zuckerberg stressed the importance of government support in resisting censorship trends.
Meta’s new content moderation policy
Mark Zuckerberg, CEO of Meta, announced a significant change in content moderation policy for the company’s platforms, Facebook and Instagram. Zuckerberg framed the change as a return to Meta’s origins and a commitment to “restoring free speech.” At its core, the new approach replaces professional fact-checkers with user-generated community ratings, a system similar to Community Notes on X (formerly Twitter).
Zuckerberg acknowledged that the transition could lead to less detection of harmful content, but he said it would also reduce the number of innocent accounts or posts mistakenly removed.
Policy changes and implications
Zuckerberg outlined plans to simplify content policies and remove restrictions on topics such as immigration and gender, which he deemed “disconnected from mainstream discourse.” He pledged to focus filters on illegal activities and serious violations, requiring higher levels of trust before taking action.
The CEO also announced the transfer of Meta’s trust, safety and content moderation teams from California to Texas. The move, Zuckerberg said, is intended to address concerns about potential employee bias when it comes to content censorship.
Collaboration with President Trump
In his Threads announcement, Zuckerberg specifically mentioned working with President Trump to counter attempts by foreign governments to censor American companies. He asserted that the United States prides itself on having the strongest constitutional protections for free speech in the world and stressed the importance of government support in resisting censorship trends.
According to the New York Times, new Trump administration officials were briefed on these policy changes before their public announcement.
Wider implications
In an exclusive interview on Fox & Friends, Joel Kaplan, Meta’s chief global affairs officer, called the new setup an opportunity to “return the balance in favor of free speech.” He criticized third-party fact-checking systems for alleged political bias in what they chose to fact-check and how they did it.
Kaplan argued that users should be able to freely discuss and debate topics such as immigration, transgender issues and gender identity on Facebook and Instagram without fear of censorship. This sentiment echoes the general trend of social media companies relaxing their content moderation policies since the peak of the pandemic.
Similar trends among social media companies
Under Elon Musk’s leadership, X (formerly Twitter) reversed its Covid-19 misinformation policy and adopted a softer stance on misinformation and hate speech.
Meta, in May 2023, carried out massive layoffs that significantly reduced the teams dedicated to combating disinformation and hate speech. Earlier, Trump’s accounts were reinstated on Facebook and Instagram after a two-year suspension that followed the January 6, 2021, attack on the Capitol; Meta promised “new safeguards to deter repeat offenses” upon his return. Zuckerberg’s announcement also follows Meta’s $1 million donation to Trump’s inauguration committee, in line with similar contributions from other big tech companies.