
Meta to terminate fact-checking in US from Monday
What's the story
Meta, the parent company of Facebook, Instagram, and Threads, has announced the end of its fact-checking operations in the US from Monday.
Joel Kaplan, Meta's Chief Global Affairs Officer, confirmed the policy shift.
The decision was first revealed in January when Meta also relaxed its content moderation policies.
The move came just ahead of President Donald Trump's inauguration and shortly after Mark Zuckerberg's $1 million donation to his inaugural fund.
Cultural shift
Zuckerberg's vision for prioritizing speech
In a video announcing the moderation changes, Zuckerberg said that "the recent elections also feel like a cultural tipping point toward once again prioritizing speech."
However, this shift has raised concerns about its potential impact on marginalized communities.
Meta's hateful conduct policy now allows allegations of mental illness or abnormality based on gender or sexual orientation.
This comes amid ongoing political and religious debates over transgender identity and homosexuality.
User-driven moderation
New community-based approach to content moderation
Meta's new take on fact-checking is inspired by Community Notes on Elon Musk's X.
This system shifts part of the content moderation burden onto users, rather than relying solely on paid professionals.
Kaplan announced that "the first Community Notes will start appearing gradually across Facebook, Threads & Instagram, with no penalties attached."
This user-driven approach can add important context to misleading or controversial posts, but it works best alongside other content moderation tools.
Content impact
Increased visibility of controversial content on Meta platforms
As Meta scales back its fact-checking efforts, the spread of false content has noticeably increased. One Facebook page manager, who spread a fake claim about ICE offering $750 for tips on undocumented immigrants, welcomed the end of the fact-checking program.