
‘This Is How Rats Work.’ Why Twitter’s Emphasis on Follower Counts Could Be Backfiring


Online follower counts have become a fashionable form of currency, numbers people use as evidence of personal and professional clout. Media outlets treat it as news when celebrities amass big followings, and an entire industry has arisen around “influencers” who endorse goods via popular feeds. It’s a metric increasingly ingrained in modern life. It’s also under the microscope at Twitter.

CEO Jack Dorsey has expressed a willingness to rethink not just policies but the platform’s fundamental design as Twitter continues to grapple with issues ranging from hate speech to disinformation campaigns. The company’s “singular priority” is increasing the health of conversation on the platform, Dorsey said over and over during a recent grilling on Capitol Hill. And, he added, features like follower counts — along with prominent buttons for likes, retweets and replies — may be giving users counterproductive “incentives.”

What incentives are those? On that point, he was less specific. But academics trying to understand the effect that social media is having on human behavior have some ideas: While such features are no doubt rewarding, for users as well as Twitter’s bottom line, experts say they may also be contributing to a culture of mindless outrage and making people more susceptible to manipulation.

Humans are very social creatures, in constant search of information about whether they belong, says Jay Van Bavel, an associate professor of psychology at New York University. And features like follower counts function as tantalizing “signs of status,” real-time metrics of popularity on display for all to see.

On Sept. 20, Kanye West criticized this aspect of the platform’s design. “[W]e should be able to participate in social media without having to show how many followers or likes we have,” the star tweeted. “This has an intense negative impact on our self worth.” It feels bad when those numbers don’t roll in. And when those counts go up, users get boosts of pleasure. People are therefore incentivized to do whatever it takes to make those numbers grow. “This is how rats work. Press a lever, get a pellet,” Van Bavel says. “The human brain is structured the same way.”

Unfortunately, research suggests that extreme content tends to generate a lot of engagement. If Twitter is a competition, being sensational is an easy way to boost your score.

In his work, Van Bavel has found that tweets containing strong moral and emotional language, what he sums up as “moral outrage,” are about 20 percent more likely to get retweeted. That squares with the fact that politicians on the ideological extremes, be they Democratic or Republican, tend to have bigger followings than their moderate counterparts. “Taking strong, polarizing stances, that’s what gets rewarded,” Van Bavel says, describing President Donald Trump as “off the charts” in this department.

And each time someone is affirmed by “likes” or new followers for a post, they’re trained to share the same kind of content in the future. “If bad behavior or provocative and angry and ranting tweets lead to more attention,” says Deb Roy, director of MIT’s Lab for Social Machines, “well, you’ve just incentivized that behavior on the platform.”

If being deliberate and nuanced yields dispiriting crickets, it’s easy to see how users could get into a cycle in which they’re constantly trying to find things to get outraged about instead. Algorithms that select for popular content may be prioritizing and spreading the very types of posts that make people the most angry, says Yale psychologist Molly Crockett, and the social usefulness of outrage may be depleted in the process.

Explosions of indignation and shame can serve as important regulators, teaching lessons to group members about when they’ve crossed a line and acting as deterrents for others. “The concern about social media is that if these platforms incentivize the extended expression of outrage,” Crockett says, “it might create a signal-to-noise problem.” If everything is worthy of outrage, she says, then effectively nothing is.

Meanwhile, the fact that polarizing stances tend to get attention contributes to the echo-chamber problem. While “tribal language” is more likely to go viral, Van Bavel says, it’s usually going viral among like-minded people. Users are losing cross-talk, yet they may not realize it, because there is no tally that quantifies negative feedback on platforms like Twitter.

And the fact that the platform is designed like a game — one that encourages speed in fast-moving news cycles — also dovetails with Twitter’s disinformation problem. If bad actors in Russia or elsewhere know that users will respond to outlandish claims, Van Bavel says, they can exploit that psychology. One study found that false news spreads six times faster than real news on Twitter, at least in part because the lies do a better job of generating feelings of shock and disgust. That’s not a hard recipe to follow.

In the race to get “likes” from like-minded peers, users may share links they’ve never read. (One study found that six out of every ten links on Twitter get retweeted without being clicked on.) And users may even share information they know is untrue. MIT cognitive scientist David Rand has found that people will spread inaccurate information on social media even when they say it’s very important to only share vetted stuff. “When you’re actually in the zone,” he says, “you don’t by default think about that.” The prospect of getting “immediate good social feedback,” Rand suggests, can be overwhelming.

The good news is that Rand’s research also suggests people will make more thoughtful calculations if they just slow down. “In some sense,” he says, “the goal is to create a chilling effect.” He’s now experimenting with a related feature on Facebook: If people are exposed to a reminder to think about accuracy before they start scrolling through their feed, might that change their decisions about what to share?

While it’s very hard to imagine Twitter getting rid of follower counts, especially given that an economy has developed around them, it’s possible to imagine some friction being added back into the system’s design. Over the years, Twitter has streamlined the platform in ways that make it easier for users to echo one another. (And, critics might argue, in ways that make the platform more addictive.) But there was a time, for instance, when there was no retweet button at all. To spread someone else’s idea before late 2009, users had to dwell on it long enough to copy and paste the text, note the person’s handle and then tweet it out themselves.

West’s tweet raised another interesting option that could shake up incentives on the platform: making the exhibition of “likes” and followers optional. “Just like how we can turn off the comments,” he tweeted, “we should be able to turn off the display of followers.”

Experts suggest there may also be a better way to gamify the system. If retweets and followers aren’t necessarily indicators of high quality discussion, then what is? Twitter is in the midst of trying to figure out how to measure the health of conversation on the platform, a process that might reveal some ideas.

MIT’s Roy, who previously worked for the company and has helped inspire this effort, says that he envisions creating the equivalent of an air-quality index for social media, something that reflects whether a tweet is generating incivility and attacks versus discussion and shared reality.

On the one hand, he says, artificial intelligence isn’t remotely capable of that kind of analysis at this point. On the other, one can learn something useful simply by counting up the number of “f*** you’s” a user is generating. Perhaps there are features that could reward tweets that produce “Eureka!” moments or factual corrections. The big dream, Roy says, is to develop indicators that create a “race to the top” among users, “to build a leaderboard for having a positive effect on the public discourse.”

Dorsey has suggested the company is experimenting with features that might promote alternative viewpoints and label bots, for a start. By changing or replacing any features that have proved to keep users on the platform, Twitter risks hurting its business in the short term. But while appearing on Capitol Hill, the CEO suggested that increasing the health of public conversation, whatever form that might take, is a “long-term growth factor” that matters more than whether the number of users initially goes down.

“Otherwise,” he said, “no one is going to use it in the first place.”


TIME Ideas hosts the world's leading voices, providing commentary on events in news, society, and culture. We welcome outside contributions. Opinions expressed do not necessarily reflect the views of TIME editors.