Reddit Allows Hate Speech to Flourish in Its Global Forums, Moderators Say

When Reddit moderator asantos3 clicked on a thread inside the group r/Portugueses in December and found it full of racist comments, he wasn’t exactly surprised. The group is often home to nationalist and nativist rhetoric, and in this instance, users there were responding angrily to a new law that allowed increased freedom of movement between Portuguese-speaking countries, including African nations like Mozambique and Angola. “Wonderful, more stupid Blacks to rob me in the street,” read one comment in Portuguese, which received 19 upvotes. “This Africanization of Portugal can only lead the country to a third-world backwardness,” read another.

So, asantos3, who moderates the much larger and more mainstream group r/Portugal, quickly sent a report to Reddit staffers with a link to the thread. Within minutes, he received an automated response: “After investigating, we’ve found that the reported content doesn’t violate Reddit’s Content Policy.”

The response was disappointing but predictable for asantos3, who has served as a volunteer content moderator for six years. As part of his duties, he deletes comments that contain racism, homophobia, sexism and other policy violations, and sends reports to Reddit about hate speech coming from smaller satellite groups like r/Portugueses. Asantos3 spoke on the condition that he would be identified only by his Reddit handle. He says his duties have led to him being doxxed—with personal details including his Instagram and LinkedIn profiles posted online—and threatened. And asantos3 says that the company itself has repeatedly ignored reports of harassment from him and other moderators. “We mostly stopped reporting stuff, because we don’t have feedback,” he says. “We have no idea if they read our reports, or if there are even Portuguese-speaking people in the company.”

Reddit’s problem is a global one, say current and former moderators. Indian subreddits like r/chodi and r/DesiMeta include Islamophobic posts and calls for the genocide of Muslims. In subreddits about China like r/sino and r/genzedong, users attack Uyghurs and promote violence against them. And members of r/Portugueses regularly traffic in anti-Black, anti-Roma and anti-immigrant sentiment.

READ MORE: The Subreddit /r/Collapse Has Become the Doomscrolling Capital of the Internet. Can Its Users Break Free?

“Anything outside the Anglosphere is pretty much ignored, to be honest,” says 11th Dimension, a former moderator of r/Portugal who stepped down from his role due to burnout. “It’s hard to convey to the company what’s racist and what’s not when the admins are so far from the details and the cultural differences.”

TIME spoke to 19 Reddit moderators around the world who shared similar stories and concerns about the San Francisco-based company’s reluctance to control hate speech in its non-English-language forums. Nearly all of the moderators agreed to speak on the condition that their real names would not be published, because they say they have received death threats and other attacks online for their work.

This all-volunteer corps of moderators, which numbers at least in the tens of thousands, is only growing in importance for the company. Reddit announced in December that it intends to make an initial public offering of stock in 2022. The company, recently valued at $10 billion, is one of the 25 most visited websites in the world according to multiple trackers, and it has made international expansion a key aspect of its post-IPO growth strategy. But some of its most devoted users—its unpaid moderators—argue that while the company aims to be the “front page of the internet,” it has not invested in the infrastructure to combat the vile content that is rife on many of its non-English-language pages.

Reddit has acknowledged that its expansion to international markets makes policing its platform more difficult, and some moderators said the company has taken steps in recent months to correct the longstanding problems. “When we begin to open in non-English speaking countries, moderation does get more complex,” a Reddit spokesperson said in a statement to TIME. “We are investing now to build and hire for non-English capabilities and add support for more languages.”

READ MORE: Facebook Let an Islamophobic Conspiracy Theory Flourish in India Despite Employees’ Warnings

These problems are not unique to Reddit. Facebook, Twitter and YouTube have each struggled to contain hate speech and misinformation as they pushed into new markets around the world. Facebook groups and posts, for example, have been linked to real-world violence in India, the Philippines, Myanmar and other countries, even as the platform spends billions of dollars a year on safety and security. This year, other Silicon Valley companies will be watching closely as Reddit attempts a precarious balancing act: gaining legitimacy and generating revenue while retaining its freewheeling, decentralized structure. Can the company preserve free speech while protecting its users? And will its model of running a lean operation with few paid staffers allow it to adapt to the responsibilities of hosting growing, diverse communities around the world?


Many moderators and analysts are skeptical. “Reddit has very little incentive to do anything about problems [in subreddits] because they see them as a self-governing problem,” says Adrienne Massanari, an associate professor at American University who has studied Reddit for years and wrote a book on its communities. “They’re creating a very successful business model in pushing work to moderators and users, who have to be exposed to horrific stuff.”

Using dog whistles to get around the rules

Reddit Inc. co-founder and CEO Steve Huffman looks on during a hearing with the House Communications and Technology and House Commerce Subcommittees on Oct. 16, 2019 in Washington, DC. The hearing investigated measures to foster a healthier internet and protect consumers. Zach Gibson—Getty Images

Reddit, founded in 2005, is essentially a message board, but it could be compared to a high school extracurriculars fair. The site comprises thousands of self-contained forums arranged by varied interests, from sports to makeup to art to pets. While many of these subreddits are innocuous, it’s no secret that Reddit has long been a haven for unseemly behavior. Reddit CEO Steve Huffman even explicitly stated in 2018 that racism was not against Reddit’s rules, elaborating that “on Reddit there will be people with beliefs different from your own, sometimes extremely so.”

Over the past two years, however—after intense criticism rained down on the company over its hate speech and harassment policies, including in the wake of the murder of George Floyd—Reddit backed away from its original hands-off ethos and has worked to clean up its communities and clamp down on noxious, racist behavior. Toxic communities like r/The_Donald have been banned; AI-powered tools aimed at curbing hate speech and abuse have been rolled out; backchannels between moderators and company employees have been established.

READ MORE: Reddit Places a ‘Quarantine’ on The_Donald, Its Largest Community of Trump Supporters

But many non-English moderators say that cleanup has not extended to the pages they monitor. R/India is one of the largest national subreddits, with 693,000 members. There, users will typically find a fairly tame mix of news links, memes and local photos. That’s partly down to the hard work of unpaid moderators to remove Islamophobic content. A group of five r/India moderators, speaking to TIME over a Zoom call, say they can spend several hours a day actively responding to queries, removing hate speech and banning rogue accounts. (Veteran moderators approve the applications of new ones; the primary draws of the gig, according to moderators, are community-building and the ability to help shape a discourse.)

One moderator for r/India has served in his role since 2011, when there was a more laissez-faire approach. Moderators soon realized that a hands-off moderation style “wasn’t working because it allowed the worst people to dominate the conversations,” he says. “There would be lots of people just saying things like ‘Muslims need to die.’”

When moderators began to block these users, some would simply return with a new account and taunt them, creating an endless game of whack-a-mole. Moderators say they saw other users instead start or join offshoot groups that allowed more controversial posts.

The largest of those r/India offshoots is currently r/Chodi, which was created in 2019 and has 90,000 members who create hundreds of posts a day. R/Chodi—whose name is a crude slang term in Hindi—contains ample examples of far-right Hindu nationalism that often spills over into hate speech and sectarian bigotry. Dozens of posts a week denigrate Islam, often depicting Muslims as ignorant, violent or incestuous.

“Poorer, dumber, breeding like rats. They’ve got it all,” one post says about Muslims in India, which is still online. “India needs to eliminate them before they rise up,” read another, which has since been deleted. (R/Chodi’s increased popularity has coincided with a steep rise in religious hate crimes in India.)

As r/Chodi has faced criticism from communities like r/AgainstHateSubreddits, the group’s own moderators have made efforts to halt the most overt examples of hate speech, including creating a list of banned words. But r/Chodi posters have simply turned to code words and increasingly slippery rhetoric to get around the moderators and Reddit’s AI-driven natural language processing techniques, according to r/India moderators. Muslims are referred to using coded language such as “Abduls,” “Mull@s,” “K2as,” or, derisively, “Peace loving” people. Christians are referred to as “Xtians,” while Pakistan is called “Porkistan.”
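Reddit lets subreddit moderators maintain banned-term lists (through tools such as its AutoModerator bot), and exact-match filtering of that kind is trivially defeated by respelling. The sketch below is a minimal illustration of that failure mode, assuming a simple whole-token match; it is not Reddit’s actual system, whose internals are not public, and the placeholder terms are hypothetical.

```python
import re

# Hypothetical banned-term list of the kind subreddit moderators maintain.
# "slur1"/"slur2" are placeholders, not the real coded terms.
BANNED_TERMS = {"slur1", "slur2"}

def naive_filter(comment: str) -> bool:
    """Flag a comment if any banned term appears as a whole token."""
    tokens = re.findall(r"[a-z0-9@]+", comment.lower())
    return any(token in BANNED_TERMS for token in tokens)

print(naive_filter("typical slur1 post"))   # True: exact match is caught
print(naive_filter("typical slur 1 post"))  # False: one space defeats it
print(naive_filter("typical s1ur1 post"))   # False: so does a leetspeak swap
print(naive_filter("typical slur@ post"))   # False: so does an "@" substitution
```

Catching such variants generally requires character normalization (mapping “@” to “a,” collapsing separators) or classifiers trained on the target language, which is precisely the kind of non-English capability moderators say the company lacks.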

Reddit said in a statement that automation and machine learning “help moderators remove 99% of reported hateful content.” But studies have shown that AI is far less effective when working outside the language it was designed for.

The moderators who spoke with TIME say they have tried to flag these alternative slurs to the Reddit administrators, paid employees who are largely based in the U.S., and have been mostly ignored.

“I have tried to report these comments 20 or 30 times, easily,” a second r/India moderator says. “I’ve tried to collate these slurs and send them the translations, but it was never even replied to.”

In a statement responding to the moderator’s claim, Reddit wrote that “harassment, bullying, and threats of violence or content that promotes hate based on identity or vulnerability” are prohibited on the platform and that they “review and work with communities that may engage in such behavior, including the subreddit in question.”

Extremists around the world use code words in a way similar to the users of r/Chodi. The user DubTeeDub—who moderates r/AgainstHateSubreddits and wrote a widely shared open letter last year excoriating racism on the platform and demanding change—says that Reddit’s administrators have failed to keep up with racists’ constantly evolving dog whistles, such as neo-Nazis putting Jewish names in triple parentheses to signal their identity.

“It’s very clearly a white supremacist symbol, but the admins will just say, ‘that seems fine to me,’ and they’ll ignore it,” DubTeeDub says.

But the moderators of r/India feel that Reddit is not only allowing hate speech to spread on r/Chodi and similar groups, but actively pushing users toward them. They have found posts from r/Chodi within r/India itself, algorithmically suggested as “posts you may like,” lending the subreddit a veneer of tacit official approval.

“These are very hateful subs, and we don’t want our subscribers going there,” the second r/India moderator says. “They can discover them on their own, but that should not be happening from inside our sub.”

Reddit’s volunteer moderators face threats

The fraught interplay between r/India and r/Chodi is emblematic of cat-and-mouse games playing out in subreddits in other parts of the world, especially as far-right political groups amass power in many countries and gain legions of followers.

In Portugal, r/Portugueses (6,900 members) is filled with anti-Roma and anti-Semitic rhetoric, homophobia, and racist depictions of Africans. “How is it possible for someone to want to see a place like this full of Africans, Brazilians, Indians and I don’t know what else?” posted one commenter alongside an idyllic illustration of a Portuguese town.

A screenshot from the Reddit community r/Portugueses, which often includes anti-Black, anti-Roma and anti-immigrant sentiment. “How is it possible for someone to want to see a place like this full of Africans, Brazilians, Indians, and I don't know what else?” the caption reads in Portuguese.

Concerned moderators have attempted to report these posts and, in turn, become targets of abuse. One of the most common tactics is for zealous users to band together and report moderators for invented reasons in an effort to get them suspended or banned by unsuspecting admins. DubTeeDub says these types of tactics have led to his suspension at least seven times.

But the attacks often turn much more personal and vicious, as trolls dig up moderators’ personal information. Asantos3, the r/Portugal moderator, says he’s been stalked across LinkedIn and Instagram. One user offered Bitcoin to anyone who could find out his address. “It’s so weird, but some of these actions are so common that we kind of ignore them now,” he says.

In Brazil, a São Paulo-based student and r/Brasil moderator who gave his name as Tet said he was threatened and doxxed when he and other moderators tried to crack down on hate speech on r/Brasilivre (176,000 members), where users post transphobia, anti-Black racism and homophobic slurs. “Stay smart because we’re watching you. Don’t think I’m the only one,” wrote one commenter in Portuguese. “I will find each one of you and kill you slowly.” Another user posted Tet’s address and personal Facebook account, writing, “Just let the hate flow and f— with them… bring trouble to their lives.” Neither of those posters has an active account anymore, and Tet has since stopped moderating the subreddit, partly due to burnout.

Perhaps it’s not surprising that there’s a high level of fatigue among moderators, who are often forced to see the worst aspects of Reddit on a daily basis. One r/India moderator tells TIME that women are especially vulnerable to harassment. “I know female mods are regularly hounded, targeted, not given space: it’s not a place to identify as a woman,” he says.

How Reddit can move forward

Many other social media companies are struggling to balance free speech ideals with the aggressive spread of hate speech and misinformation on their platforms.

This fall, documents released by the whistleblower Frances Haugen showed that Facebook deprioritized efforts to curtail misinformation. In July, Black soccer players for England’s national team received torrents of racist abuse on Facebook and Twitter following the Euro 2021 Championship final, prompting British Prime Minister Boris Johnson to press social media companies on “the urgent need for action.” In India, Facebook allowed Hindu extremists to operate openly for months despite their having been banned by the platform.

Facebook, in response to criticism, has pledged to bolster its safety team and resources: it has 40,000 employees working on safety and security alone. Reddit, similarly, is pledging to ramp up its efforts, although its team is skeletal in comparison. Over the last year, the company has expanded its workforce from 700 to 1,300.

A Reddit spokesperson said that the company has opened offices in Canada, the U.K., Australia and Germany, and would “continue to expand to other countries” in an effort to get closer to its global communities. Last year, Reddit created a Mod Council to receive feedback from moderators. It is also testing a new feature to give users more advanced blocking capabilities, to limit the mobilizing power of extremists, harassers and bigots. In October 2021, the company posted a statement laying out statistics about its efforts toward “internationalizing safety,” writing, “The data largely shows that our content moderation is scaling and that international communities show healthy levels of reporting and moderation.”

Many Reddit moderators feel the site’s system of using volunteer moderators is less healthy than the company suggests. “There are a lot of people who just move on,” says Jonathan Deans, a Scotland-based moderator of r/worldnews. “They’re like, ‘I’m sick of doing this. We just remove hateful comments all day, and what do we get out of it? Not really anything.’”

Massanari, the American University professor, argues that Reddit’s problems will continue to worsen without a concerted internal effort. “Reddit’s defense has been, ‘If you ignore these spaces, they’ll go away,’” she says. “But the scholars and experts who have researched extremism and hate speech for years have clearly said that the more you allow that stuff to continue, you get more and more extreme versions of it.”

“We take safety extremely seriously and are committed to continuously enhancing our policies and processes to ensure the safety of users and moderators on our platform,” Reddit said in a statement. “We are seeing some improvements in the prevalence of hateful content as a result of our efforts, and we will continue to invest in our safety capabilities as well as moderator tools and resources.”

Ellen Pao, the former interim chief executive of Reddit and current CEO of Project Include, agrees that the company’s unpaid moderation model has severe limits. When she led the company between 2014 and 2015, Pao made it a priority to take down revenge porn and unauthorized nude photos and to ban toxic communities like the fat-shaming group r/fatpeoplehate, which spurred a huge backlash from many of Reddit’s most active users. Pao says that Silicon Valley has historically sidelined efforts like these in favor of its bottom line.

“You have these platforms that were founded by white men, who do not experience the same levels of toxicity, harassment and harm themselves, so they don’t see or understand these problems and let them fester,” she says. “It’s something they’ve been able to ignore for a long time.”

Pao says that hiring more people whose jobs involve confronting these issues is the first step. “If you really care about your users, and if you really want to prevent harassment and harm, then why wouldn’t you take on those roles yourself?” she says.

Back in Portugal, the moderator asantos3 is still spending his free time trying to clean up Portuguese-language subreddits. After receiving the automated message about the racist thread, he sent a frustrated note with more details to Reddit’s staff administrators. This time, an admin wrote back—a rare occurrence in itself. But the note only reinforced the gap between him and the company: “I think some things may be getting lost in the translations here but am happy to take another look,” the admin wrote. “It would also help if you were able to explain a bit more directly how the linked article promotes hate.”

Asantos3 responded with some details, and reported a few more comments in the thread, which asserted that the influx of Portuguese-speaking Africans would lead to “population replacement and genocide,” “kidnap and rape,” and “violent possessive monkey rage.” But he received the same automated brush-off and never heard back from a human. The whole thread, as of publication, is still online.

“I’m feeling frustrated,” he said. “I guess it doesn’t matter at all.”
