Meta’s decision to pull the plug on its independent fact-checking programme in the US has drawn sharp criticism from various quarters and ignited a broader debate over the most effective strategy to combat misinformation on social media. In a blog post titled ‘More Speech, Fewer Mistakes’ published on January 7, the company behind Facebook, Instagram, WhatsApp, and Threads announced a series of changes to the way content will be moderated across its apps in the US. The most tangible change is that Meta will eliminate fact-checks posted by fact-checkers in the US, replacing them with a ‘Community Notes’ system similar to the one on X (formerly Twitter).

Why is Meta switching from fact-checks to Community Notes? How will they work? What are the potential drawbacks of Community Notes? Have they been effective in the past? Take a look.

What prompted the shift

Nine years ago, Meta started flagging fake news with the help of outside fact-checkers. Its independent fact-checking programme was expanded following reports that Russian disinformation campaigns had targeted American voters to influence the 2016 US presidential election. Until now, fact-checkers and experts certified by the International Fact-Checking Network (IFCN) could independently review and rate potential misinformation on Meta’s platforms by citing their original reportage, interviewing primary sources, consulting public data, and analysing media, including photos and videos. Meta would ensure that every piece of content rated as false by IFCN-certified fact-checkers was less visible to users. It would also attach a warning label below such content, linking to an article published by the fact-checker.

Now, Meta has changed gears and says this approach is flawed. “Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how,” the company said.
However, the IFCN has argued that fact-checkers have not been biased in their work. “The fact-checkers used by Meta follow a Code of Principles requiring non-partisanship and transparency,” IFCN director Angie Holan said in a statement. Besides criticising Meta for throwing fact-checkers under the bus, Holan said that its decision “comes in the wake of extreme political pressure from a new administration and its supporters.” US President-elect Donald Trump has been critical of big tech companies for allegedly censoring conservatives online.

How Community Notes work

Community Notes was first piloted as a programme called ‘Birdwatch’ by Twitter in 2021, before Elon Musk purchased the platform for $44 billion and rebranded it as X. The crowdsourced fact-checking model allows users to add facts and context below a specific post. A Community Note shows up below a post only if enough contributors vote that the context it provides is helpful. As a result, the model is said to improve as more users participate.

Currently, anyone on X can become a contributor and add Community Notes as long as they meet certain criteria, such as having a six-month-old account, a verified phone number, and zero violations of X’s rules. Initially, contributors are only allowed to rate Community Notes as helpful or not. Over time, they are allowed to write and attach their own Community Notes, which will be rated by other contributors. All Community Notes contributions on X are publicly available; anyone can download the data to analyse trends and flag issues.

Meta’s Community Notes model is likely to be similar to that of X. “It will require agreement between people with a range of perspectives to help prevent biased ratings,” the company said. Users in the US were able to sign up to be contributors via Facebook, Instagram, and Threads starting January 7.

Challenges with Community Notes

Given its crowdsourced nature, Community Notes could be vulnerable to coordinated manipulation.
To address this challenge, X uses a bridging algorithm to determine whether a Note appears below a post. This means that a Note will be shown only if it has been rated as ‘helpful’ by people who have tended to disagree in their past ratings, according to X’s guidelines. X claims that this bridging algorithm “helps to prevent one-sided ratings and to prevent a single group from being able to engage in mass voting to determine what notes are shown.”

To ensure diverse viewpoints, X has said that it proactively asks contributors who are likely to provide a different perspective for their input via a ‘Needs Your Help’ tab that appears within a particular Note. Community Notes contributors are also given some protection in the form of auto-generated aliases so that they are not identified and targeted for their contributions. Contributors who write too many Community Notes rated as ‘Not Helpful’ are temporarily locked out, to prevent the system from being overwhelmed with spammy, low-quality Notes.

However, some challenges remain. Community Notes may not be as effective in stopping the spread of misinformation to other platforms. Their ability to capture the nuance that goes into fact-checking political news has also been questioned.
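The bridging idea can be illustrated with a toy sketch: surface a Note only when ‘helpful’ ratings come from raters who have historically disagreed with each other. This is a deliberately simplified illustration, not X’s actual algorithm, which is reportedly based on matrix factorisation over the full rating history; the function names, the 0.5 threshold, and the data shapes here are all assumptions made for the example.

```python
# Toy illustration of a "bridging" rule (not X's published algorithm).
# A Note is surfaced only if at least two raters who usually disagree
# with each other both rated it 'helpful'.

from itertools import combinations

def disagreement(history_a, history_b):
    """Fraction of co-rated Notes on which two raters disagreed.
    Each history maps a Note id to that rater's helpful/not-helpful vote."""
    shared = set(history_a) & set(history_b)
    if not shared:
        return 0.0
    return sum(history_a[n] != history_b[n] for n in shared) / len(shared)

def note_is_shown(helpful_raters, histories, threshold=0.5):
    """Show the Note only if some pair of raters who found it helpful
    has historically disagreed more often than the threshold."""
    return any(
        disagreement(histories[a], histories[b]) > threshold
        for a, b in combinations(sorted(helpful_raters), 2)
    )
```

Under this rule, a Note rated helpful only by like-minded raters (pairwise disagreement near 0) stays hidden, while one endorsed by raters from opposing camps is shown, which is the behaviour X describes when it says the algorithm prevents one-sided ratings and mass voting by a single group.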