Meta abandons professional fact-checking in the United States for a community model. A revolution or a risk for online information?

Since its launch, Meta (formerly Facebook) has been at the heart of controversies related to content moderation and fact-checking. But in early 2025, Mark Zuckerberg announced a radical change: the end of professional fact-checking on Facebook and Instagram in the United States. This shift, inspired by the community model of X (ex-Twitter), raises many questions about the future of online moderation and freedom of expression. So, is this a real revolution or a simple capitulation to political pressure?

Meta's choice: a historic turning point

For more than a decade, Meta had been working with journalists and independent organizations to fact-check information circulating on its platforms. These initiatives were aimed at combating fake news and hateful content. However, according to Zuckerberg, this system had become ineffective and too politicized. He said:

"Verifiers have done more to reduce trust than to improve it."

Now, Meta is betting on a system of "community notes" similar to that of Elon Musk on X. Users will be able to comment, contextualize and correct information through a community validation process. This method, while innovative, raises questions about its effectiveness and its limits.

The end of professional fact-checking: why now?

The timing of this decision is not insignificant. Donald Trump's re-election as President of the United States has led to a shift in the balance of power between big tech companies and the political establishment. Under pressure from the Trump administration and in the face of recurring criticism from Elon Musk, Zuckerberg appears to have opted for a strategic alignment.

In addition, Joel Kaplan, a close friend of Donald Trump, has been promoted to a key position at Meta. His recent statement illustrates this change of direction well:

“Too much harmless content has been censored. It’s time to give users a voice again.”

This approach also appears to address growing concerns about “censorship” and bias in traditional media. But by abandoning professional fact-checking, does Meta risk losing control over the quality of the information it disseminates?

The limits of the community model

The community rating system has theoretical advantages. It encourages diversity of perspectives and relies on collective intelligence to ferret out misinformation. However, several problems remain:

1. Majority biases

Decisions made by a community may reflect cultural, political, or social biases. This can lead to the removal of legitimate but unpopular content.

2. The risk of manipulation

Organized groups could exploit this system to distort or skew information. For example, disinformation campaigns could be validated if they garner sufficient support.

3. Lack of expertise

Unlike journalists and specialists, ordinary users do not always have the skills necessary to assess the veracity of complex information.

A global impact, but limited for now

While these changes are currently limited to the United States, they could herald a global shift in Meta policies. However, in Europe, the Digital Services Act (DSA) rules impose strict standards for moderation. Clara Chappaz, France’s minister delegate for AI and digital technology, said that France will ensure that these practices are not adopted on European territory.

A double-edged freedom of expression

By presenting these new rules as a defense of free speech, Meta seems to want to win the favor of conservatives. But this freedom could come at a high cost:

  • Proliferation of fake news: Without rigorous verification, misleading information could multiply.
  • Increase in hate speech: Meta's new moderation policies ease restrictions on some content, raising concerns among observers.
  • Loss of credibility: By sidelining experts, Meta risks losing the trust of users who expect reliable platforms.


Meta's decision to abandon professional fact-checking marks a major shift in online content moderation. While this community-based approach reflects an ambitious vision of freedom of expression, it also carries significant risks. As eyes turn to the results of this experiment in the United States, one question remains: can Meta truly balance freedom and responsibility?