Meta, the parent company of Facebook and Instagram, has announced plans to overhaul its content moderation policies by ending its third-party fact-checking programme and introducing user-generated community notes to flag misinformation. The move comes amid criticism from U.S. President-elect Donald Trump and his Republican allies, who have long accused the company of censoring right-wing voices through its fact-checking initiatives.
Meta, which boasts over three billion users globally, defended the policy shift as a way to promote transparency and community engagement. However, critics argue that the change may exacerbate the spread of misinformation on its platforms.
Namibian fact-checking expert Frederico Links weighed in on the implications of Meta’s decision, highlighting concerns raised during last year’s Africa Fact-Checking Summit.
“The issue did come up at the Africa Fact-Checking Summit last year, with some saying it’s not going to happen and others saying it’s going to happen. But I think it was clear they were going to pull the plug on third-party fact-checking partnerships—and now it’s happened,” Links said.
He described the move as “bad news” and warned of the negative impact on the quality of information shared on social media.
“There’s a big problem with the type of information people are receiving via social media. Social media is a big problem, and with this now, I think it’ll only get worse. The mis- and disinformation is not going away, but the fact-checking is, so it’s just going to get worse, unfortunately.”
The removal of independent fact-checkers raises questions about the effectiveness of community notes in addressing misinformation, particularly on sensitive topics. Critics fear that relying on user-generated content could lead to inconsistent moderation and further polarize online discourse.
As Meta’s changes roll out, experts and policymakers will closely monitor their impact on global information ecosystems.