How the Wisdom of Crowds Could Solve Facebook’s Fact-Checking Problem

Allen is a PhD student at the MIT Sloan School of Management. Rand is the Erwin H. Schell Professor of Management Science and Brain and Cognitive Sciences at MIT.

In 1907, the statistician Francis Galton described something strange he had observed at a county fair: attendees were playing a game in which they guessed the weight of an ox, with the guess closest to the true weight winning a prize. To Galton’s surprise, while the individual guesses varied wildly, the average of the crowd’s guesses was just one pound away from the ox’s true weight, closer than any individual guess.
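The arithmetic behind Galton’s observation is easy to reproduce. Below is a minimal simulation in Python, offered purely as an illustration: the crowd size and the error range are assumptions, not Galton’s actual data. It shows how averaging many noisy but unbiased guesses cancels out individual error:

```python
import random

TRUE_WEIGHT = 1198  # pounds; the reported weight of Galton's ox

random.seed(42)  # fixed seed so the illustration is reproducible

# Simulate 800 fairgoers whose guesses are individually noisy but
# unbiased: each guess is the true weight plus random error.
# (The crowd size and the +/-15% error range are assumptions.)
guesses = [TRUE_WEIGHT * random.uniform(0.85, 1.15) for _ in range(800)]

crowd_average = sum(guesses) / len(guesses)
crowd_error = abs(crowd_average - TRUE_WEIGHT)
typical_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"Crowd average: {crowd_average:.1f} lb (off by {crowd_error:.1f} lb)")
print(f"Typical individual guess: off by {typical_error:.1f} lb")
```

Run as written, the crowd’s average typically lands within a few pounds of the true weight, while a typical individual guess is off by roughly 90 pounds: individual errors point in different directions and largely cancel in the average.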

Journalist James Surowiecki coined the name for this phenomenon, in which noisy individual judgments are aggregated to produce remarkably accurate results: the “wisdom of crowds.”

And in it lies one possible answer to a thorny issue: how to combat misinformation on Facebook. Sometimes fact-checking social media seems humanly impossible. There are far too many articles for fact-checkers to check (and AI isn’t up to the task yet). In the U.S., for instance, Facebook’s fact-checking partners employ a handful of people—a total of 26 employees as of a 2020 report, although the number is likely larger today—who must attempt to monitor content posted by over 2 billion users, amounting to some 8 billion unique URLs per year. These experts, trained to meticulously research content and label its veracity, can fact-check only a tiny fraction of what is posted every day. Facebook uses automated systems to flag content similar to the false content identified by fact-checkers, but even the most generous estimate of that system’s bandwidth still leaves a huge amount of potentially misleading content unchecked.

It certainly pays to check. Research has consistently shown that corrections from fact-checkers reduce belief in misinformation and make people less likely to share it. Content flagged by fact-checkers can also be demoted in users’ news feeds, reducing the number of people exposed to it in the first place. But on a platform as enormous as Facebook, relying on professional fact-checkers is like turning on a faucet in a burning building—right idea, wrong scale.

What if the solution were regular people? Technology companies are betting that the wisdom of crowds can help solve the scaling problem. Both Facebook and Twitter have recently launched crowdsourced fact-checking products, hoping to harness the power of the masses that Galton discovered at that county fair. The wisdom of crowds has already been applied successfully in a variety of other domains—prediction markets, chess, medical diagnoses.

The question is whether fact-checking should be next. People are rightly skeptical of the concept. Digital literacy, the ability to sift through and understand information from an ever-growing smorgasbord of digital sources, is low among Internet users, and the topics—politics, science—can be fraught and polarized. Ordinary people fall for falsehoods all the time; that’s why misinformation is a problem in the first place.

Yet new research suggests that members of a crowd can work together to separate fact from fiction. In a recent paper published in Science Advances, we found that, in judging the truthfulness of headlines, the ratings of a small, politically balanced group of laypeople closely corresponded to the ratings of professional fact-checkers. The crowd’s performance was all the more notable because, unlike the fact-checkers, who were asked to carefully research each claim, the laypeople saw only the headline and lede of each article and gave their assessment without doing outside research. Despite this far less intensive process, their answers corresponded closely to the fact-checkers’—at much lower cost and higher speed.

Here’s how the study worked: We began with a set of articles that Facebook’s algorithm flagged for fact-checking, either because the articles were potentially misleading, were going viral, or simply concerned important topics like politics or health. We then asked three professional fact-checkers to research each article and rate its accuracy on a scale of one to seven. At the same time, we asked a group of ordinary people on the website Amazon Mechanical Turk to rate the accuracy of just the headline and lede of those articles, without doing any additional research.

It was a bit like the ox: The fact-checkers agreed with each other far more than they agreed with any given person in the crowd, but once the crowd’s responses were averaged, that was no longer the case. After gathering roughly 10 to 15 responses from laypeople, the average answer of our politically balanced crowd corresponded to the fact-checkers’ average answer just as well as the fact-checkers corresponded to each other. The crowd was efficient, too: raters took about 30 seconds per headline and were paid around $10 an hour, so a single rating cost roughly 8 cents, and at 10 ratings per item, crowdsourced fact-checking cost less than a dollar per headline.
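To make the aggregation logic concrete, here is a stylized sketch of this kind of analysis, again in Python. Every number in it is an assumption for illustration (the latent accuracy scores, the noise levels, the headline count), not the study’s data; the point is simply to show how the correlation between a crowd’s average rating and the fact-checkers’ average rises as the crowd grows:

```python
import random
from statistics import mean

random.seed(0)

N_HEADLINES = 200    # assumed number of headlines, for illustration
LAY_NOISE = 1.8      # assumed rating noise for individual laypeople
CHECKER_NOISE = 0.8  # assumed (smaller) noise for professional fact-checkers

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def clamp(r):
    """Keep a noisy rating on the study's 1-to-7 scale."""
    return min(7.0, max(1.0, r))

# Each headline has a latent accuracy score; every rater sees it with noise.
truth = [random.uniform(1, 7) for _ in range(N_HEADLINES)]

# Three fact-checkers per headline, as in the study design described above.
checker_avg = [mean(clamp(t + random.gauss(0, CHECKER_NOISE)) for _ in range(3))
               for t in truth]

# Average crowds of increasing size and compare them to the fact-checkers.
for k in (1, 5, 10, 15, 25):
    crowd_avg = [mean(clamp(t + random.gauss(0, LAY_NOISE)) for _ in range(k))
                 for t in truth]
    print(f"crowd of {k:2d}: correlation with fact-checkers = "
          f"{pearson(crowd_avg, checker_avg):.2f}")
```

With these assumed noise levels, the correlation climbs steeply from a single rater and begins to plateau somewhere around 10 to 25 raters, mirroring the qualitative pattern described above: averaging washes out individual noise, and beyond a certain crowd size additional raters add little.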

Granted, some stories are easier to vet than others. In 2017, the false story that got the most “engagement” on Facebook was a hoax with the outlandish headline: “Babysitter transported to hospital after inserting a baby into her vagina.” It doesn’t take an expert to know that that headline is preposterous. But would a crowd bring the same wisdom to a politician’s false claim that is contradicted by the opposing party? Wouldn’t members of the crowd simply take sides?

Our research found reason for optimism. Even in this polarized environment, it actually took fewer crowd responses to match the fact-checkers’ performance on political articles than on nonpolitical ones.

Polls show that Republicans are more apt to accuse fact-checkers of liberal bias. So you might expect our politically balanced crowd, which includes Republicans, to land on answers at variance with the professional fact-checkers more often than a group made up only of Democrats. And while we did find that individual Democrats tended to agree with the fact-checkers more, once the crowd reached a critical mass of about 15 responses, politically balanced crowds correlated with the fact-checkers just as strongly as all-Democrat crowds did.

We found a similar pattern for other characteristics. While individuals who were more knowledgeable about politics, or who scored higher on a cognitive reasoning test, agreed more with the fact-checkers, once the crowd reached a sufficient size, no crowd composed of such high performers outperformed the ordinary politically balanced crowd. A larger, balanced crowd can make up for weaker individual performance.

So crowdsourcing holds promise—but it has to be done carefully, because much depends on design. In our study, community members were not offered a choice of which stories to rate. An “opt-in” design, by contrast, would open the door to people proactively seeking out information they disagree with in order to flag it as “fake news.” Indeed, when we recently analyzed Twitter’s opt-in crowdsourced fact-checking pilot program, Birdwatch, we found that users from one party were much more likely to flag content from the other side as misleading. While these dynamics could be evidence of politically motivated brigading, it is also possible that each side is simply policing the other side’s misinformation, with partisanship helping to motivate participation. There are still many questions to explore before we declare crowdsourced fact-checking an unqualified success.

We also aren’t suggesting that platforms replace their professional fact-checkers with ordinary people. We envision a system that combines crowds of ordinary people, professional fact-checkers, and machine-learning techniques to scale fact-checking in a meaningful way. What’s more, we think of fact-checking as just one of many tools needed to limit the spread of misinformation. Others, like accuracy nudges, algorithmic downranking, and digital literacy interventions, also play a role in fighting the larger problem.

The sheer number of people on social media is often blamed for its woes—vaccine conspiracy groups with members in the thousands, a fake story shared by millions. But one way to counter the apparent madness of these online crowds is to harness an equally powerful phenomenon: their wisdom.
