
Social Media Companies Vowed To Stop Videos of Terror Attacks. Buffalo Showed They Have More Work To Do


Three years after social media platforms committed to putting an end to viral videos of terror attacks, the shooting in Buffalo, New York, has revealed that their efforts are still a work in progress.

A self-professed white supremacist targeted Black shoppers at a Buffalo supermarket on Saturday, killing 10 people and injuring three others, in what authorities said was a racist attack. He livestreamed his rampage via Twitch, a streaming platform owned by Amazon. Twitch said it took down the broadcast within two minutes of the violence starting—but that was enough time for copies of the video to be downloaded and shared widely on social media platforms over the weekend.

Many versions of the video had been edited—with added text, blurring or cropping—in apparently successful attempts to evade the platforms’ automated removal systems, according to Jacob Berntsson, the head of policy and research at Tech Against Terrorism, a U.N.-backed project for countering online extremism. Copies of the video were circulating on Twitter and Facebook on Saturday and Sunday, according to multiple media reports.

“He knew that as long as there was time for people to watch and download, then this would spread [online] regardless of how quickly Twitch took it down,” said Berntsson in an interview on Monday, referring to the attacker. “Clearly an audience was ready and prepared to download this kind of stuff.”

The Buffalo shooter had openly shared his plans to target people of color in a livestreamed mass shooting for months on the chat app Discord, according to Bloomberg. He used the app to point people toward his livestream on Twitch, the report said. Discord did not respond to a request for comment.

Inspired by previous attacks

Livestreamed attacks by white supremacists are a potent radicalization tool for future extremists, and platforms have struggled to remove edited copies of the videos of past attacks. In March 2019, a white supremacist gunman massacred 51 people at two mosques in Christchurch, New Zealand, in an attack he livestreamed on Facebook. Months later, a man with a camera mounted on his helmet attacked a synagogue in Halle, Germany, killing two people and injuring two others in an attack he livestreamed on Twitch. Videos of both attacks were shared widely on social media platforms, prompting a game of whack-a-mole between tech companies and users.

Read More: ‘A Game of Whack-a-Mole.’ Why Facebook and Others Are Struggling to Delete Footage of the New Zealand Shooting

The Buffalo shooter was directly radicalized by those videos. In a manifesto posted online shortly before the attack, seen by TIME, he said that he was inspired by the Christchurch attacker’s politics, and that he decided to livestream his own attack in order to inspire others. He also said he chose to stream on Twitch because it had taken the platform 35 minutes to remove the livestream of the Halle attack.

The two minutes it took Twitch to remove the video of the Buffalo attack, compared with the 35 minutes for the Halle attack, speaks to the progress tech companies have made since 2019. “That’s a very strong response time considering the challenges of live content moderation, and shows good progress,” Twitch said in a statement to TIME on Monday, adding that it was working hard to stop copies of the video from being uploaded. Facebook’s parent company, Meta, and Twitter said that they had designated the video under their violence and extremism policies shortly after the shooting, and were removing copies from their platforms, as well as blocking links to external sites where it was hosted.

Read More: ‘There’s No Such Thing As a Lone Wolf.’ The Online Movement That Spawned the Buffalo Shooting

Still, despite their progress, tech companies’ work so far has not been enough to stop these videos from spreading, whether by preventing attacks from being livestreamed in the first place, cutting off broadcasts in progress, or catching the copies reuploaded afterward. “I’ll blame the platforms when we see other shooters inspired by this shooter,” says Dia Kayyali, the associate director for advocacy at digital rights group Mnemonic. “Once something is out there, it’s out there. That’s why the immediate response has to be very strong.”

How platforms are cooperating to stop terrorist content

The biggest platforms are now collaborating far more closely than they were at the time of other livestreamed terror attacks.

In the immediate wake of the New Zealand attack, many of the world’s biggest social media platforms signed onto the “Christchurch Call,” a commitment to stamp out the spread of terrorist content online. Through an industry group, the Global Internet Forum to Counter Terrorism (GIFCT), the platforms are sharing identifying data about the Buffalo shooter’s video and manifesto among themselves, to make both easier to remove from their sites.

Through GIFCT, platforms share encoded versions of terrorist content, known as hashes, that they have removed from their sites, allowing, for example, Facebook to quickly and easily remove a copy of a terrorist video that had only appeared on Twitter up to that point. Hashing is an efficient way of representing a video, photograph or other document as a string of numbers, instead of sharing the file itself. It is effectively impossible to recreate a piece of content from its hash code, but identical content will always return the same hash code if run through the same hashing algorithm. If the hash code of a new upload matches an entry in a hash database of known illegal content, a tech company can remove that content even if its own staff have never come across it before.
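
To make the idea concrete, the sketch below shows the basic lookup logic in Python. It is a simplified, hypothetical illustration: it uses an ordinary cryptographic hash (SHA-256) and placeholder data, whereas production systems typically rely on purpose-built media hashes, but the database check works the same way.

```python
import hashlib

def file_hash(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical shared database: only these short strings are
# exchanged between platforms, never the underlying files.
known_hashes = {
    "e3b0c44298fc1c149afbf4c8996fb924...",  # placeholder entry
}

def should_remove(upload_path: str) -> bool:
    """Flag an upload whose hash matches known removed content."""
    return file_hash(upload_path) in known_hashes
```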

This makes hashing a good way for different platforms to share information about illegal content, such as terrorist propaganda or child abuse imagery, without having to distribute the files themselves. GIFCT’s members include Facebook, YouTube, Amazon, Twitter and Discord.

The problem with hashing, however, is that a bad actor only needs to alter the file a small amount—for example by changing its color profile, or cropping the picture—to return a totally different hash code, and thus evade the platforms’ automated removal mechanisms. So, three years after the Christchurch attack, the only tool required to fool the platforms’ automated systems for removing terrorist content is basic video editing software, plus some persistence. This is known as “adversarial” behavior and makes the problem of scrubbing terrorist content from the internet far more difficult, according to Kayyali and Berntsson.
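
A few lines of Python show how fragile exact matching is. The bytes below are a hypothetical stand-in for a real video file; the point is that even a one-byte change produces a completely unrelated digest, so a lookup keyed on the original hash misses the altered copy.

```python
import hashlib

original = b"...video frame data..."  # stand-in for a real video file's bytes
altered = original + b"\x00"          # append a single byte, as trivial re-editing might

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(altered).hexdigest())
# The two digests are entirely different, so a database lookup
# keyed on the original hash will not catch the altered upload.
```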

Read More: These Tech Companies Managed to Eradicate ISIS Content. But They’re Also Erasing Crucial Evidence of War Crimes

While hashing’s shortcomings are not the root cause of the problem, some counterterrorism experts say they are one of the core weaknesses in the platforms’ current joint approach to terrorist content. “The patchy response from various platforms that are [members] of the hash-sharing database arguably suggests that improvements can be made,” Berntsson says. “Platforms should be able to handle this, but it still speaks to the fact that there are groups of people who are quite committed to circumventing the moderation tools that are in place.”

In a statement, a Meta spokesperson said that hash-sharing is only one of many ways the company is dealing with the Buffalo video. The company has teams of human moderators looking for copies that have slipped through, and has also uploaded copies of the video and manifesto to its own internal databases. The company says it uses machine-learning tools to catch lookalike copies of videos in the database even if they have been altered—although it’s clear from the video’s proliferation that these tools are not 100% accurate. But beyond hash-sharing, the gains made by Meta’s computational resources and workforce will only help remove copies of the video from Meta’s own platforms, not from other social media sites like Twitter, Twitch or TikTok. This means many companies are duplicating the work needed to identify and take down altered copies of the video, at a time when human bandwidth is often the bottleneck for enforcement against terrorist content.
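
One common way to catch altered copies, sketched below as a hypothetical illustration rather than a description of Meta’s actual system, is to compare perceptual fingerprints and accept any match within a small distance, instead of demanding exact equality. Small edits then flip only a few bits of the fingerprint, and the match still fires.

```python
def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def is_near_duplicate(fingerprint: int, known: list[int], threshold: int = 10) -> bool:
    """Match if any known fingerprint is within `threshold` bits.

    `fingerprint` stands in for a perceptual hash of an upload; real
    systems derive it from the media content itself, so re-encoding,
    cropping or recoloring changes only a few bits.
    """
    return any(hamming_distance(fingerprint, k) <= threshold for k in known)

# Hypothetical example: a copy whose fingerprint differs by 3 bits still matches.
print(is_near_duplicate(0b1011_0110, [0b1011_0001]))  # True
```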

A GIFCT spokesperson told TIME on Monday that the group was exploring ways beyond hashing to share information about terrorist content between platforms, but said that those efforts had not progressed past the earliest stages.

Some in the sector have lost patience with the platforms. “I’m sure there’s issues with people remixing content and only posting a clip of it, and all of the tricks that we know to try to evade automatic detection,” says Kayyali, who sits on civil-society advisory boards for both the Christchurch Call and GIFCT. “But still, I want to hear exactly the technical explanation from GIFCT about how it was possible that hours after [they shared hashes of the video among the platforms] the video was still out there.”

A wider problem

Even if big tech platforms could scrub terrorist content entirely, it still flourishes on smaller platforms. As the shooter’s open planning on Discord showed, many of the people circulating the video are likely collaborating through private messaging channels and smaller social networks. Only 22 people watched the Buffalo attacker’s Twitch stream in real time, according to the Washington Post. But this was all it took for some of them—presumably directed to the stream by the attacker himself on Discord—to download and spread the video far and wide.

Most experts say that in addition to the big platforms, governments and the media also have a role to play. Under current U.S. law, domestic terrorists are not designated as terrorists in the same way that, for example, Islamist extremists are. This means platforms don’t have the legal certainty they do when dealing with content from Al Qaeda and ISIS, which they have largely succeeded in scrubbing from their platforms. “Designation can help provide some more legal certainty and clarity for companies,” says Berntsson, whose organization, Tech Against Terrorism, runs its own tool, similar to GIFCT’s, for alerting platforms to terrorist content.

And while the shooter said in his manifesto that he had been radicalized by the video of the Christchurch attack circulating online, racist conspiracy theories have entered the political mainstream via other channels, including cable television.

The Buffalo shooter wrote in his manifesto about his belief that white people are being intentionally replaced in the U.S. by people of other races—a conspiracy theory that has recently been picked up, and amplified, by Fox News host Tucker Carlson and Republican politicians.

“It doesn’t start when this individual presses ‘stream’ on Twitch,” says Berntsson. “It starts long before.”

– WITH REPORTING BY VERA BERGENGRUEN/WASHINGTON, D.C.

Write to Billy Perrigo at billy.perrigo@time.com