In an apparent effort to ensure their heinous actions would “go viral,” a shooter who murdered at least 49 people in attacks on two mosques in Christchurch, New Zealand, on Friday live-streamed footage of the assault online, leaving Facebook, YouTube and other social media companies scrambling to block and delete the footage even as other copies continued to spread like a virus.
The original Facebook Live broadcast was eventually taken down, but not before the 17-minute video had been viewed, replayed and downloaded by users. Copies of the footage quickly proliferated to other platforms, including YouTube, Twitter, Instagram and Reddit, and back to Facebook itself. Even as the platforms took some copies down, other versions were re-uploaded elsewhere. The episode underscored social media companies’ Sisyphean struggle to police violent content posted on their platforms.
“It becomes essentially like a game of whack-a-mole,” says Tony Lemieux, professor of global studies and communication at Georgia State University.
Facebook, YouTube and other social media companies have two main ways of checking content uploaded to their platforms. First, there’s content recognition technology, which uses artificial intelligence to compare newly uploaded footage against known illicit material. “Once you know something is prohibited content, that’s where the technology kicks in,” says Lemieux. Second, companies augment that technology with thousands of human moderators who manually review videos and other content. Still, social media companies often fail to recognize violent content before it spreads virally, letting users exploit the unprecedented, instantaneous reach of the very platforms trying to police them.
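In rough outline, that two-stage setup might look like the sketch below: an automated check against known prohibited content, backed by a human review queue for anything users flag. Every name and value here is invented for illustration; no platform’s actual systems are this simple.

```python
# Minimal sketch of a two-stage moderation pipeline, assuming a
# precomputed set of fingerprints for known prohibited videos.
# All identifiers and values are hypothetical.
from collections import deque

KNOWN_PROHIBITED = {"9f2c41a7", "b83d0e55"}  # fingerprints of known illicit footage
review_queue = deque()                       # items awaiting a human moderator

def screen_upload(video_id: str, fingerprint: str, user_reports: int) -> str:
    if fingerprint in KNOWN_PROHIBITED:
        return "blocked"                 # AI recognizes known content instantly
    if user_reports > 0:
        review_queue.append(video_id)    # novel content relies on human judgment
        return "pending_review"
    return "published"                   # novel, unreported content sails through

print(screen_upload("v1", "9f2c41a7", 0))  # blocked
print(screen_upload("v2", "c0ffee00", 3))  # pending_review
print(screen_upload("v3", "deadbeef", 0))  # published: the gap described above
```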
YouTube, Facebook and Twitter did not answer questions from TIME about how many copies of the Christchurch video they had taken down. New Zealand police said they were aware the video was circulating on social media, and urged people not to share it. “There is extremely distressing footage relating to the incident in Christchurch circulating online,” police said on Twitter. “We would strongly urge that the link not be shared.” Mass shooters often crave notoriety, and each horrific event brings calls to deny assailants the infamy they so desire. (Four arrests were made after the Christchurch shooting, and it remains unclear whether the shooter who live-streamed the attack acted alone.)
Facebook said the original video of the attack was taken down only after New Zealand police alerted the company to its existence, suggesting that its automated systems had not flagged the broadcast.
“We quickly removed both the shooter’s Facebook and Instagram accounts and the video,” a Facebook spokesperson said. “We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware.”
Experts say the Christchurch video highlights a fatal flaw in social media companies’ approach to content moderation.
“It’s very hard to prevent a newly-recorded violent video from being uploaded for the very first time,” Peng Dong, the co-founder of content-recognition company ACRCloud, tells TIME. Most content-recognition technology, he explains, is based on a “fingerprinting” model. A social media company looking to block a video from being uploaded at all must first add a copy of that video to a reference database, so that new uploads can be compared against it.
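A stripped-down version of that fingerprinting idea is sketched below, using a toy “average hash” over frame pixels. Real systems rely on far more robust perceptual signatures; the frames and values here are invented for illustration.

```python
# Toy illustration of the "fingerprinting" model: reduce each frame to a
# compact binary signature and match new uploads against a database of
# signatures from known prohibited footage. Purely a sketch of the idea.

def average_hash(frame):
    """frame: 2D list of grayscale pixel values (0-255)."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    return tuple(p > mean for p in pixels)  # one bit per pixel

# The known video's frames must enter the reference database first...
known_frame = [[10, 200], [220, 30]]
reference_db = {average_hash(known_frame)}

# ...only then can new uploads be checked against it.
exact_copy = [[10, 200], [220, 30]]
print(average_hash(exact_copy) in reference_db)   # True: re-upload caught

# A never-before-seen video has no entry to match, so it passes.
novel_frame = [[90, 90], [90, 91]]
print(average_hash(novel_frame) in reference_db)  # False
```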
Even when platforms have a reference point in the original offending video, users can manipulate their copies of the footage to circumvent upload filters, for example by altering the image or audio quality. The better “fingerprinting” technology gets, the more variants of an offending clip it can detect, but the imperfection of current systems helps explain why copies of the video continued to appear on sites like YouTube hours after the initial assault. “Please know we are working vigilantly to remove any violent footage,” YouTube said in a statement.
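The cat-and-mouse dynamic can be made concrete with a small sketch: matching signatures by Hamming distance, rather than demanding an exact match, catches mildly altered copies, but heavy manipulation still escapes. The signatures and threshold below are invented for illustration.

```python
# Sketch of tolerant ("fuzzy") signature matching, assuming fixed-length
# binary fingerprints. Values and threshold are hypothetical.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

original  = (1, 0, 1, 1, 0, 0, 1, 0)  # signature of the known video
reencoded = (1, 0, 1, 0, 0, 0, 1, 0)  # slight quality loss flips one bit
distorted = (0, 1, 0, 0, 1, 1, 0, 1)  # heavy manipulation flips most bits

THRESHOLD = 2  # bits allowed to differ while still counting as a match

for name, sig in [("re-encoded", reencoded), ("distorted", distorted)]:
    verdict = "match" if hamming(original, sig) <= THRESHOLD else "no match"
    print(name, verdict)
# re-encoded match    <- fuzzy matching catches mild alterations
# distorted no match  <- aggressive edits still evade the filter
```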
Social media companies are also experimenting with machine learning to detect violent footage the first time it is uploaded, experts say, but the algorithms are not yet advanced enough to reliably take down such footage. It is easy to imagine, Lemieux says, an algorithm confusing footage of a first-person-shooter video game with footage of real-life violence.
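The trouble Lemieux describes boils down to a threshold decision on a classifier’s confidence score, as the hypothetical numbers below illustrate: set the bar low and video games get censored, set it high and real attacks slip through.

```python
# Illustration of the classifier-threshold trade-off. The scores are
# made up; no real model or platform policy is represented here.

violence_scores = {
    "real_attack_stream": 0.93,
    "fps_video_game":     0.88,  # visually similar, should NOT be removed
    "cooking_tutorial":   0.04,
}

THRESHOLD = 0.90  # lower it and the game is censored; raise it and the attack survives

for name, score in violence_scores.items():
    action = "take down" if score >= THRESHOLD else "allow"
    print(f"{name}: {action}")
```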
Human moderators are fallible, too. The job is psychologically grueling, as a recent report from The Verge illustrates, with workers exposed to the most grotesque footage imaginable on a daily basis for low pay and with minimal mental health support. Facebook, YouTube and Twitter each employ thousands of content moderators around the world; many have recently promised to take better care of their workers.
What makes the Christchurch video especially challenging is that the attack wasn’t recorded and uploaded later, but live-streamed in real time as it unfolded. With current AI technology, it is all but impossible to detect a violent scene as it is being live-streamed, let alone to take the stream down while it is still happening. So on platforms like Facebook Live, YouTube Live and the Twitter-owned Periscope, all of which let users go live anywhere, at any time, rapid content moderation is a nearly impossible task. “There’s no perfect technology to take down a video without a reference database,” says Dong.
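Dong’s point can be reduced to a few lines: a fingerprint lookup only succeeds once a reference exists, and a brand-new attack has no entry until someone adds one. The fingerprints below are invented for illustration.

```python
# Sketch of why reference-based matching fails on a live stream of a
# brand-new event. All fingerprints here are hypothetical.

reference_db = set()  # no fingerprint of the attack exists yet

def check_live_chunk(chunk_fingerprint):
    return "take down" if chunk_fingerprint in reference_db else "keep streaming"

for minute, fp in enumerate(["a1", "a2", "a3"], start=1):
    print(f"minute {minute}: {check_live_chunk(fp)}")  # keep streaming, every time

# Only after the event, once a copy has been fingerprinted into the
# database, can re-uploads of the same footage be matched and removed.
reference_db.add("a1")
print(check_live_chunk("a1"))  # take down
```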
Several murders and other horrific acts have been broadcast on Facebook Live before. But the Christchurch massacre appears to be the first time a mass shooter chose to live-stream the killing of dozens, apparently in an effort to ensure the attack would spread around the Internet, carrying its hateful message with it. Social media companies like Facebook and YouTube, already under scrutiny for a host of other reasons, will once again be left to defend their content moderation practices, and to explain why a mass murderer was so easily able to exploit their systems to have a heinous act seen around the world in a matter of minutes.
Write to Billy Perrigo at billy.perrigo@time.com