If you’re looking for solid information on COVID-19, the Internet is not always your best bet—equal parts encyclopedia and junkyard, solid science on the one hand and rubbish, rumors and fabulism on the other. Distinguishing between the two is not always easy, and with so much of the time we spend online devoted either to sharing links or reading ones that have been shared with us, not only does the junk get believed, it also gets widely disseminated, creating a ripple effect of falsehoods that can misinform people and even endanger lives.
“At its worst, misinformation of this sort may cause people to turn to ineffective (and potentially harmful) remedies,” write the authors of a new paper in Psychological Science, “as well as to overreact (hoarding goods) or, more dangerously, to underreact (engaging in risky behavior and inadvertently spreading the virus).”
It’s well-nigh impossible to keep the Internet entirely free of such trash, but in theory it ought not be quite as hard to confine it to the fever swamps where it originates and prevent it from spreading. The new study explores not only why people believe Internet falsehoods, but how to help them become more discerning and less reckless about what they share.
One of the leading reasons misinformation about the COVID-19 pandemic gains traction is that it’s a topic that scares the daylights out of us. The stronger the emotional charge of something we read online, the likelier we are to pass it on—either to share the joy if it’s good news or to unburden ourselves if it’s bad.
“Our research has shown that emotion makes people less discerning,” says David Rand, associate professor at the MIT Sloan School of Management and a co-author of the new study. “When it comes to COVID-19, people who are closer to the epicenter of the disease are likelier to share information online, whether it’s true or false.”
That’s in keeping with earlier research out of MIT, published in 2018, showing that fake news spreads faster on Twitter than the truth does. The reason, the researchers in that study wrote, was that the lies “were more novel than true news … [eliciting] fear, disgust and surprise in replies,” just the qualities that give sharing its zing in the first place.
Political leanings also influence what’s shared and not shared. A 2019 Science study, from researchers at Northeastern, Harvard, and SUNY-Buffalo, showed that neither the left nor the right has a monopoly on sharing fake news or real news, with both ends mixing fact and fiction in roughly equal measure. Which facts and which fictions they chose, however, typically tracked the stories that fit most comfortably with their own ideologies.
To dig deeper still into the cognitive processes behind sharing decisions, Rand and colleagues developed a two-part study. In the first, they assembled a sample group of 853 adults and asked them to take a pair of tests. One, known as the Cognitive Reflection Test (CRT), measures basic reasoning processes, often with questions that are slipperier than they seem. (For example: “If you are running a race and you pass the person in second place, what place are you in?” The seemingly obvious answer—first place—is wrong. You’ve simply taken the second-place runner’s spot; the person in first is still ahead of you, so the correct answer is second place.)
The other test was more straightforward—measuring basic science knowledge with true and false statements such as “Antibiotics kill viruses as well as bacteria” (false); and “Lasers work by focusing sound waves” (false again).
Finally, the entire sample pool was divided in half. Both groups were shown the same series of 30 headlines about COVID-19—15 true and 15 false—but were given different tasks. One group was asked to judge each headline as accurate or inaccurate. The other group was asked whether they would be inclined to share the headlines online.
The results were striking. The first group correctly identified the truth or falsehood of about two thirds of the headlines. The second group—freed from having to consider the accuracy of what they were reading—reported that they would share about half of the headlines, divided equally between true ones and false ones. Had they paused to evaluate the headlines’ veracity, they would have been expected to share at something closer to the first group’s rate: about two thirds true and one third false. “When people don’t reflect, they make a rapid choice and they share without thinking. This is true for most of us,” says Gordon Pennycook, assistant professor at the University of Regina School of Business in Saskatchewan and lead author of the study.
Most, but not all. The study did find that people who scored higher on the CRT and the basic science test were a little less indiscriminate, doing a better job both at spotting false stories and at making sensible sharing decisions.
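To make that gap concrete, here is a minimal sketch in Python of one simple way to quantify the “discernment” the researchers describe—the difference in how true and false headlines are treated. The percentages are rough figures taken from this article, not the study’s actual data.

```python
# Illustrative arithmetic only: the percentages below are rough
# figures taken from this article, not the study's exact data.

def discernment(rate_true: float, rate_false: float) -> float:
    """How much better true headlines fare than false ones
    (positive means people are telling the two apart)."""
    return rate_true - rate_false

# Accuracy condition: about two thirds of headlines were judged
# correctly, i.e. true items endorsed at roughly 67% and false
# items at roughly 33%.
accuracy_condition = discernment(rate_true=0.67, rate_false=0.33)

# Sharing condition: about half of all headlines were shared,
# split evenly between true and false ones.
sharing_condition = discernment(rate_true=0.50, rate_false=0.50)

print(f"accuracy condition: {accuracy_condition:+.2f}")  # +0.34
print(f"sharing condition:  {sharing_condition:+.2f}")   # +0.00
```

On these rough numbers, people asked about accuracy discriminate substantially between true and false headlines, while people asked only about sharing discriminate essentially not at all—which is the gap the second experiment set out to close.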
The solution, clearly, is not to force everyone to pass a reasoning test before they’re admitted online. Things are actually a lot easier than that, as the second part of the study showed.
For that portion, a different sample group of 856 adults was once again divided in two and once again shown the same set of headlines. This time, however, neither group was asked to determine the accuracy of the headlines; both were asked only whether they would share them. There was still one difference between the groups: one of them was first shown one of four non-COVID-19-related headlines and asked to determine whether it was true or false.
That priming—asking the participants to engage their critical faculties before beginning the sharing task—seemed to make a dramatic difference: the primed group was only about a third as likely to share a false headline as the unprimed group.
“Nudges like this help a lot,” Rand says. “If you get people to stop and think, they do a better job of evaluating what they’re reading.”
The researchers believe there are easy, real-world applications that platforms like Facebook and Twitter could use to give people the same kind of occasional cognitive poke the study provided. “One idea we like is to crowd-source fact-checking out to users,” Pennycook says. “Ask people if [some] headlines are accurate or not; the platforms themselves could learn a lot from this too.”
Rand cautions against anything that could seem patronizing to readers—leaving them feeling like they’re being quizzed by some social media giant. Instead, he recommends a little bit of humility.
“You could stick little pop-ups into newsfeeds that say, ‘Help us improve our algorithms. Are these stories accurate?’” he recommends.
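As a thought experiment only, here is what that kind of nudge might look like in code. This is a hypothetical Python sketch: every name, data shape and probability in it is invented for illustration and comes from no real platform’s API.

```python
import random

# Purely hypothetical sketch of the pop-up idea Rand describes; the
# data shapes, names and probability are invented for illustration
# and are not drawn from any real platform.

ACCURACY_PROMPT = "Help us improve our algorithms. Is this story accurate?"
NUDGE_PROBABILITY = 0.05  # assumption: prompt on roughly 1 in 20 feed loads

def build_feed(stories: list) -> list:
    """Occasionally slip an accuracy-check prompt into the top of a
    user's feed, nudging them to think about truth before they share."""
    feed = list(stories)
    if feed and random.random() < NUDGE_PROBABILITY:
        feed.insert(0, {
            "type": "accuracy_prompt",
            "text": ACCURACY_PROMPT,
            "story": random.choice(feed),  # ask about one real headline
        })
    return feed

if __name__ == "__main__":
    sample = [{"type": "story", "headline": "Example headline A"},
              {"type": "story", "headline": "Example headline B"}]
    print(build_feed(sample))
```

The design choice mirrors the study’s framing: the prompt arrives occasionally and asks for the user’s help, rather than quizzing them, so the nudge engages critical thinking without feeling patronizing.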
The Internet is never going to be scrubbed entirely of rubbish. For plenty of hucksters, politicos and conspiracy-mongers, its hospitality to inaccuracies is a feature, not a bug, and there is no easy way to purge their handiwork. But small interventions can clearly make a difference. And when it comes to information about the pandemic—on which life-and-death decisions may be made—the stakes for trying could not be higher.
Write to Jeffrey Kluger at jeffrey.kluger@time.com