
Why is Meta replacing fact-checks with Community Notes in the US? Will it be more effective?

Why is Meta dropping its fact-checking programme in the US? How will these changes affect the way content is moderated on its platforms?

Mark Zuckerberg at the company's flagship Connect event. (Express Photo)

In a major shake-up of its content moderation strategy, Meta has announced that it will be pulling the plug on its third-party fact-checking programme in the US. Instead, the social media giant said it will be embracing the Community Notes system followed on Elon Musk-owned platform X.

Meta admitted that its content moderation efforts had “gone too far” to the point where “we are making too many mistakes”.

“Too much harmless content gets censored, too many people find themselves wrongly locked up in ‘Facebook jail,’ and we are often too slow to respond when they do. We want to fix that and return to that fundamental commitment to free expression,” Joel Kaplan, the newly appointed head of Meta’s global policy team, said in a blog post published on Tuesday, January 7.


“The recent elections also feel like a cultural tipping point to once again prioritising speech,” CEO Mark Zuckerberg said in a video posted on Meta’s website.

Meta’s change in approach comes as a new regulatory regime takes shape under incoming US President Donald Trump, who has been critical of big tech companies for allegedly censoring the online speech of conservatives in the country.

How will Community Notes work on Meta’s platforms?

On X, select users add helpful notes with facts and context below a specific post. It is primarily intended to prevent the spread of misinformation.

Anyone on X who meets certain criteria can become a contributor and add Community Notes. Initially, contributors are only allowed to rate Community Notes. Over time, they are allowed to write and attach their own Community Notes which will also be rated by other contributors. However, the feature is not without its challenges.


Yet, Meta has said that it is adopting X’s crowdsourced model to curb misinformation because it works. “They empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see,” it said.

Community Notes on Meta will be written and rated by contributing users, similar to X. “It will require agreement between people with a range of perspectives to help prevent biased ratings,” the company said.
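
Meta has not published the underlying algorithm, but the agreement requirement can be illustrated with a rough sketch. In the hypothetical Python below, a note is surfaced only when contributors from more than one perspective group rate it helpful; the group labels, threshold and function names are illustrative assumptions, not Meta's or X's actual method.

# A minimal sketch of agreement-based note scoring, assuming contributors
# are grouped by perspective. The groups and thresholds are hypothetical.
from collections import defaultdict

def note_is_shown(ratings, min_per_group=2):
    """ratings: list of (contributor_group, rated_helpful) tuples.

    A note is surfaced only if contributors from at least two different
    perspective groups rated it helpful, a crude stand-in for the
    'agreement between people with a range of perspectives' idea.
    """
    helpful_by_group = defaultdict(int)
    for group, rated_helpful in ratings:
        if rated_helpful:
            helpful_by_group[group] += 1

    groups_in_agreement = [g for g, n in helpful_by_group.items() if n >= min_per_group]
    return len(groups_in_agreement) >= 2

# Example: helpful ratings from only one group are not enough.
print(note_is_shown([("group_a", True), ("group_a", True), ("group_b", False)]))  # False
print(note_is_shown([("group_a", True), ("group_a", True),
                     ("group_b", True), ("group_b", True)]))                      # True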

Users on Facebook, Instagram, and Threads can sign up to be a contributor starting today. The Community Notes on Meta’s platforms will appear as a “much-less obtrusive label indicating that there is additional information for those who want to see it.”

Meta plans on gradually rolling out Community Notes in the US over the next few months. It did not mention whether the changes would be extended to other countries as well.


What was the previous approach? Why is Meta moving away from it?

In 2016, Meta launched its independent fact-checking programme. Fact-checkers and experts certified by the non-partisan International Fact-Checking Network (IFCN) could independently review and rate potential misinformation by citing original reports, interviewing primary sources, consulting public data and conducting analyses of media, including photos and videos.

They did not have the ability to remove content on their own. Instead, Meta would ensure that every piece of content rated as false by IFCN-certified fact-checkers would be downranked so that fewer people saw it.
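
As a rough illustration of that mechanism, the hypothetical Python sketch below applies a demotion multiplier to a post's ranking score when it carries a "false" rating. The factor and rating labels are assumptions; the article only says rated-false content was shown to fewer people, not how much less.

# A minimal sketch of downranking content flagged by fact-checkers.
# The demotion factor and labels are illustrative, not Meta's values.

DEMOTION_FACTOR = 0.2  # hypothetical multiplier applied to flagged posts

def ranking_score(base_score, fact_check_rating):
    """Return the feed-ranking score after applying any fact-check demotion.

    fact_check_rating: None if unreviewed, otherwise a label such as
    "false" assigned by an IFCN-certified fact-checker.
    """
    if fact_check_rating == "false":
        return base_score * DEMOTION_FACTOR  # fewer users see the post
    return base_score

# Example: a flagged post keeps only a fraction of its original reach score.
print(ranking_score(100.0, "false"))  # 20.0
print(ranking_score(100.0, None))     # 100.0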

“That’s not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how,” the company said.

Meta further revealed that in December last year, 10-20 per cent of actions taken against content may have been errors, meaning the content in question may not have violated the platform’s policies.


This is far from the first time that Meta has admitted to mistakenly removing content across its apps.

The company issued a public apology after its automated content moderation systems downranked photos of the assassination attempt on Trump during last year’s presidential campaign. Meta’s Oversight Board had also warned against the “excessive removal of political speech” in the run-up to the US presidential election in November last year.

Shortly after Trump’s electoral victory, Zuckerberg was seen dining with the president-elect at his Mar-a-Lago resort as the Meta chief moved to repair his once-fraught relationship with Trump.

The Meta boss has tapped Joel Kaplan to lead the company’s global policy team after Nick Clegg stepped down. Kaplan is a prominent Republican who served as deputy chief of staff in the White House under US President George W Bush.


A few days ago, Zuckerberg appointed Dana White, a close Trump ally and the face of the Ultimate Fighting Championship (UFC), to the tech giant’s board of directors. Zuckerberg also donated $1 million to Trump’s inaugural fund in December last year.

Which other policies has Meta decided to discontinue?

Meta has said that it will no longer demote fact-checked content.

Earlier, users saw a warning label on a piece of content that had been flagged as misleading by its third-party fact-checkers. The label also linked to an article by the fact-checker. Now, the company is scrapping these warning labels as well.

Meta is also doing away with its previous restrictions on topics like immigration, gender identity and gender. These policy changes may take a few weeks to be fully implemented, it said.


Furthermore, the company announced that it will be tuning its AI content moderation tools, which are designed to scan for and flag content that violates the platform’s policies.

“We’re going to tune our systems to require a much higher degree of confidence before a piece of content is taken down,” Meta said. Content that violates its “less severe” policies will only face action if it is reported by a user, rather than being automatically detected.

Notably, Meta revealed that it is using large language models (LLMs) to provide a second opinion before taking action against content.
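
Taken together, these changes describe a stricter enforcement gate. The hypothetical Python sketch below shows one way such a gate could be wired: low-severity content is only considered when a user reports it, automated takedowns require high classifier confidence, and an LLM check acts as a second opinion. The threshold, policy list and llm_second_opinion helper are illustrative assumptions, not Meta's actual pipeline.

# A minimal sketch of the enforcement gating described above; all names
# and values below are hypothetical stand-ins.

HIGH_CONFIDENCE = 0.95                                    # assumed stricter takedown threshold
SEVERE_POLICIES = {"terrorism", "child_safety", "fraud"}  # illustrative list

def should_take_down(policy, classifier_score, user_reported, llm_second_opinion):
    """Decide whether to action a piece of content.

    classifier_score: confidence from the automated scanning system (0-1).
    llm_second_opinion: callable returning True if an LLM also judges
    the content to violate the named policy.
    """
    # "Less severe" policies are only actioned when a user reports them.
    if policy not in SEVERE_POLICIES and not user_reported:
        return False

    # Require a much higher degree of confidence before removal.
    if classifier_score < HIGH_CONFIDENCE:
        return False

    # Use an LLM as a second opinion before enforcing.
    return llm_second_opinion(policy)

# Example: unreported low-severity content is left alone; a high-confidence,
# LLM-confirmed violation of a severe policy is actioned.
print(should_take_down("spam", 0.99, user_reported=False,
                       llm_second_opinion=lambda policy: True))   # False
print(should_take_down("fraud", 0.97, user_reported=False,
                       llm_second_opinion=lambda policy: True))   # True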

“We are also going to recommend more political content based on personalised signals and are expanding the options people have to control how much of this content they see,” the company said.


“As part of these changes, we will be moving the trust and safety teams that write our content policies and review content out of California to Texas and other US locations,” it added.
