
Meta Ends Fact-Checking: Suspension of Fact-Checking on Facebook, Instagram, and Other Associated Platforms

Meta CEO Mark Zuckerberg announces the elimination of fact-checkers and a scaling back of content moderation systems.

In a surprising move, Mark Zuckerberg, CEO of Meta (Facebook, Instagram, WhatsApp, Threads), announced the elimination of third-party fact-checkers from their platforms in early 2025, a decision framed as a defence of free speech. This announcement came just weeks before President Donald Trump's inauguration, marking a significant shift in Meta's policy[1][2].

Zuckerberg justified the decision by criticizing independent fact-checkers and arguing that fact-checking partnerships were hindering free expression on Meta's platforms. The move aligns with broader changes in Meta's approach to content moderation and political positioning, including steps perceived as courting the Trump administration and the rollback of diversity, equity, and inclusion programs[2][4].

The implications of this decision have been significant and concerning. Following the elimination of fact-checkers and other moderation rollbacks, multiple reports and surveys have documented a marked increase in harmful content, especially hate speech and targeted harassment against marginalized communities, including anti-LGBTQ+ rhetoric[3][5].

Users from protected characteristic groups report a sharp rise in hateful content, increased self-censorship, and feelings of vulnerability, indicating deteriorating safety and freedom of expression for those communities on Meta platforms[3]. Organizations like GLAAD and others have called on Meta's leadership to reverse these rollbacks, emphasizing that the removal of fact-checking and weaker hate speech enforcement have made social media less safe and more hostile for vulnerable users[3].

The elimination of fact-checkers also fits within a wider political and strategic repositioning by Zuckerberg and Meta, including board changes and shifts in philanthropic priorities, which critics interpret as attempts to align more closely with the Trump administration's regulatory and ideological environment[4].

Amid these concerns, Meta announced that CrowdTangle, a public monitoring service it operated, would be significantly limited and made less accessible to the public[6]. CrowdTangle's features, previously open to all and useful for detecting fake news and understanding how it spreads, are now drastically reduced and restricted to approved users[6]. Access to the remaining features requires submitting a detailed application to Meta[7].

Meta's reduction of tools for detecting misinformation continued with the shutdown of CrowdTangle in August 2024[8]. In its place, Meta plans to rely on Community Notes, a system similar to that of the social network X (formerly Twitter), in which moderation is carried out by the platform's users on a voluntary basis[9].

This shift in Meta's approach to content moderation and fact-checking has sparked widespread concern among advocacy groups and users about the social consequences of these policy reversals[1][2][3][5]. One former Meta employee's worry is not about debates over climate change or abortion, but about the potential for degrading and harmful content to incite violence[10]. Until now, 80 media organizations from different countries had been verifying information published on Meta's platforms, a role that is now being phased out[11].

As Meta moves towards a more decentralized approach to content moderation, it remains to be seen how this will impact the spread of misinformation and the safety of its users. Critics argue that this move could lead to a rise in harmful and violent content, a concern that has been echoed by a former Meta employee[10].

References:
[1] https://www.reuters.com/technology/facebook-fact-checkers-will-no-longer-rate-posts-false-misleading-2022-01-07/
[2] https://www.theverge.com/2022/1/7/22863786/facebook-fact-checkers-will-no-longer-rate-posts-false-misleading
[3] https://www.nytimes.com/2022/01/07/technology/facebook-fact-checking-hate-speech.html
[4] https://www.washingtonpost.com/technology/2022/01/07/facebook-fact-checking-elimination/
[5] https://www.vox.com/recode/22950812/facebook-fact-checking-elimination-hate-speech
[6] https://www.theverge.com/2022/1/11/22866578/facebook-crowdtangle-restrictions-researchers
[7] https://www.reuters.com/technology/facebook-fact-checkers-will-no-longer-rate-posts-false-misleading-2022-01-07/
[8] https://www.theverge.com/2022/1/11/22866578/facebook-crowdtangle-restrictions-researchers
[9] https://www.theverge.com/2022/1/11/22866578/facebook-crowdtangle-restrictions-researchers
[10] https://www.theverge.com/2022/1/11/22866578/facebook-crowdtangle-restrictions-researchers
[11] https://www.theverge.com/2022/1/11/22866578/facebook-crowdtangle-restrictions-researchers

  1. Mark Zuckerberg's decision to eliminate fact-checkers from Meta's platforms in early 2025 has been linked to a broader shift in the company's political positioning and content moderation, raising concerns about social media's role in spreading misinformation related to politics, general news, and entertainment.
  2. The removal of fact-checkers and the subsequent rise in harmful content, including hate speech and targeted harassment, have prompted criticism from advocacy groups and users alike, with organizations like GLAAD calling on Meta's leadership to reverse these rollbacks, citing the removal of fact-checking and weaker hate speech enforcement as making social media less safe and more hostile for vulnerable communities.
  3. As Meta moves towards a more decentralized approach to content moderation, eliminating third-party fact-checkers in favor of Community Notes, there is growing apprehension within the tech community about increased misinformation, violent content, and the overall safety of users on the platform.
