
The New Zealand Attack Exposed How White Supremacy Has Long Flourished Online


Before opening fire on worshippers at two mosques in the New Zealand city of Christchurch last Friday, killing 50 and wounding many more, the far-right terrorist who carried out the attack uploaded a post to an anonymous online message board called 8chan.

“Well lads, it’s time to stop shitposting and time to make a real life effort post,” he wrote, shortly before live-streaming the massacre on Facebook. Then, addressing 8chan’s denizens, he added a request: “Please do your part by spreading my message, making memes and shitposting as you usually do.”

His appeal was heeded by a decentralized, international network of white supremacists and their sympathizers, many of them users of anonymous message boards like 8chan, who proceeded to repost copies of the horrific video to sites like Facebook and YouTube. More than 1.5 million copies of the video were uploaded to Facebook within 24 hours, with some 300,000 slipping past its upload filter, where users as young as 13 years old could watch them. YouTube did not release numbers, but told the Guardian the uploading was “unprecedented both in scale and speed — at times as fast as a new upload every second.”

The shocking repost campaign illustrated not only the unprecedented reach of online white supremacist networks, but also how they have learned to exploit new tools offered by the large, ill-equipped social media platforms in order to spread messages of hatred online.

The white supremacist ideology espoused by the New Zealand terrorist festered on the political fringes for decades, but the Internet has given it unprecedented reach and pushed it into mainstream political discourse. The trend began as early as the 1980s, when white supremacists recognized that they could get around the gatekeepers of traditional media by posting their own material online, first on dial-up bulletin boards and later on the web.

In the 1990s, new anonymous message boards became spaces where users could post whatever text, images and links they wanted, with no usernames to tie a post to the person posting it. That allowed people to “say disgusting things, post violent and horrible images, link to disgusting videos and blogs, all without any fear that their personal relationships would be disrupted,” says Joan Donovan, a Harvard researcher on white supremacist online networks. “That format really incentivizes some of the worst kinds of behavior and disgusting kinds of content.”

These sites, initially forums populated by privacy-minded, tech-savvy users looking to have fun online, quickly became spaces where irony-laden memes, jokes and discussions mixed with increasingly sinister racism and misogyny. The most popular was 4chan, founded in 2003 and still online today. “4chan was certainly problematic” in the early days, says Whitney Phillips of Syracuse University, who has been following the development of “chan culture” for years. “But it didn’t have a clear politics the way we think of it as having a clear politics now.” Today, 4chan is virtually synonymous with the far-right. But back in the 2000s and even early 2010s, Phillips says, anybody who took themselves too seriously was a target for ridicule and sometimes harassment.

The turning point came in August 2014, Phillips says. That was when “Gamergate,” a harassment campaign targeting women in and around the video game industry, took off. It was coordinated partly on 4chan by users who were furious about what they saw as the unacceptable influence of progressive values, including feminism, over video-gaming culture.

While Gamergate had little directly to do with white supremacy, Phillips says, it proved pivotal: 4chan banned discussion of it altogether by September 2014, spurring a migration of users to 8chan, a separate forum founded in 2013 that quickly became an even more fetid hotbed of sexism and other far-right ideas, including white supremacy. White supremacists, among them the owners of the infamous Daily Stormer, quickly realized the recruitment possibilities 8chan offered, according to Phillips, and became active posters there. “8chan is somewhere where people voice views that are as toxic as they come, and are celebrated for it,” says Peter W. Singer, a strategist at the New America think tank and author of LikeWar: The Weaponization of Social Media.

Users of 4chan and 8chan formed an early support base for Donald Trump when he announced his candidacy for the U.S. presidency in 2015. Some users hailed him as a “meme candidate,” while others, who had spent time sharing white nationalist content on the sites, saw in him the chance to propel their politics into the mainstream. The Trump campaign flirted with that constituency, retweeting white supremacist accounts and echoing their talking points as campaign promises, like banning Muslims from the U.S. and building a wall on the border with Mexico. Those policies, to be sure, were popular outside of chan culture, but they received a crucial boost from its prowess at online amplification.

“Over the last 15 years, political views that had previously been kept out on the fringes have started to become accepted by nationalist populist leaders wanting to attract voters,” says Eirikur Bergmann, a professor at Bifrost University in Iceland and the author of Conspiracy and Populism. He identifies social media as the catalyst for disseminating such views, in a format now commonly referred to as “fake news.”

“White supremacy has moved [from anonymous message boards] into broader Internet culture and American politics,” says Singer. “The things being voiced on these anonymous message boards have also been voiced by the President of the U.S., who is simultaneously the most powerful social media figure in the world.”

Reluctant to police what could and could not be said on their platforms, the big social media sites for years cited a broad commitment to free speech. Casting themselves as platforms for people to post information, rather than publishers of that information, allowed them to legally distance themselves from responsibility for content posted on their sites. In recent years, that defense has started to wear thin. Even though Facebook and YouTube have nominally outlawed “hate speech” for years, they only banned Alex Jones, a prominent and routine peddler of far-right conspiracies, in 2018. Acknowledging the influence that hate speech, fake news and trolling are having on global politics, more and more governments around the world are considering, or even implementing, laws that place responsibility for such content back onto social media companies like Facebook and Twitter.

In the wake of the New Zealand attack, Internet service providers there and in Australia (where the shooter was from) have begun blocking 4chan, 8chan and LiveLeak, a streaming website that hosts violent and other unsavory videos. Before the attack, however, Internet service providers and governments around the world (which do have the power to block access to such sites, at least for less technologically savvy users) had done little to take websites where racism thrives offline.

Worldwide horror at the New Zealand attack, and at the complicity of social media sites in broadcasting it to the world, may yet supply the momentum that has so far been lacking. “We cannot,” said New Zealand Prime Minister Jacinda Ardern after the attack, “simply deal with some of the issues we face with our social media … on a case-by-case basis.” Raising the possibility of global regulation of the Internet, she said the world must take “a united front on a global issue.”

“On one hand, the virality is horrific, but it also points to pathways out,” says Singer. “Think of all the ways we’ve made ISIS’s job much more difficult over the past several years. The online landscape used to be plagued with ISIS black flags, and there was a wide variety of intervention against them, from technology companies screening them out, to action from law enforcement to track down members, to go after funding streams, to cyber action to take them offline, to arrests and people literally being taken out by drone strikes.”

“Think about all that, and then the absence of that for far right extremists,” Singer continues. “These methods work. They don’t eradicate extremism, but they definitely make it a lot harder.”


Write to Billy Perrigo at billy.perrigo@time.com