Filmed by a Syrian opposition soldier riding in the back of a pickup truck, the shaky footage posted to Facebook on Oct. 14 showed Turkish troops walking cheerfully across a rocky plain in northern Syria toward a town that had just been vacated by U.S. forces.
It was one in a flurry of videos posted by citizen journalist Abo Liath Aljazarawy to his Facebook page, chronicling a rapid reshaping of northern Syria’s political map after President Trump ordered U.S. troops to withdraw from the region earlier that month.
But days later, the videos had disappeared from the Internet. Facebook had banned his page.
Aljazarawy, who asked to be referred to by a pseudonym for his security, had been using the page, called Eye on Alhasakah after his hometown, to share information about the Syrian civil war with his 50,000 followers. Some days, like that one in October, he shared news of troop movements. On others, he posted videos or photographs showing the bloody aftermath of military attacks. The point, he says, was to keep ordinary people informed. “We don’t have loyalty to anyone,” Aljazarawy told TIME. “Most of our followers are just common people.”
But the videos and photographs he shared showing the ground reality of the Syrian civil war were the reason his page was banned. Facebook confirmed to TIME that Eye on Alhasakah was flagged in late 2019 by its algorithms, as well as users, for sharing “extremist content.” It was then funneled to a human moderator, who decided to remove it. After being notified by TIME, Facebook restored the page in early February, some 12 weeks later, saying the moderator had made a mistake. (Facebook declined to say which specific videos were wrongly flagged, except that there were several.)
The algorithms were developed largely in reaction to ISIS, which shocked the world in 2014 when it began to share slickly produced online videos of executions and battles as propaganda. Because of the very real way these videos radicalized viewers, the U.S.-led coalition in Iraq and Syria worked overtime to suppress them, and enlisted social networks to help. The companies quickly discovered that there was too much content for even a huge team of humans to deal with. (More than 500 hours of video are uploaded to YouTube every minute.) So, since 2017, Facebook and YouTube have been using algorithms to automatically detect extremist content. Early on, those algorithms were crude and only supplemented the human moderators’ work. But now, following three years of training, they are responsible for an overwhelming proportion of detections. Facebook now says more than 98% of content removed for violating its rules on extremism is flagged automatically. On YouTube, more than 20 million videos were taken down before receiving a single view in 2019. And as the coronavirus spread across the globe in early 2020, Facebook, YouTube and Twitter announced their algorithms would take on an even larger share of content moderation, with human moderators barred from taking sensitive material home with them.
But algorithms are notoriously worse than humans at understanding one crucial thing: context. Now, as Facebook and YouTube have come to rely on them more and more, even innocent photos and videos, especially from war zones, are being swept up and removed. Such content can serve a vital purpose for both civilians on the ground — for whom it provides real-time information — and human rights monitors far away. In 2017, for the first time ever, the International Criminal Court in the Netherlands issued a war-crimes indictment based on videos from Libya posted on social media. And as violence-detection algorithms have developed, conflict monitors are noticing an unexpected side effect, too: these algorithms could be removing evidence of war crimes from the Internet before anyone even knows it exists.
On a rainy day in early January, Chris Woods leads the way up the narrow staircase of an end-of-terrace house on the campus of Goldsmiths University in southeast London. The top two floors here serve as the base of Airwars, which Woods founded in 2014 to hold militaries to account for civilian casualties. Out of this cramped headquarters, he and his small team have collected evidence of more than 52,000 civilian deaths, most gleaned from social media. They have forced the U.S.-led coalition in Iraq and Syria to share monthly information about civilians killed by collateral damage, and keep an eye on the Turkish and Russian militaries too. They also recently expanded to cover Somalia, and are working on a Yemen archive. All this is funded on a shoestring. “Our budget for the next year is around a third of a million pounds [$430,000] for everything that we do,” Woods says, in his attic office. “That’s roughly the price of a guided bomb.”
The removal of Eye on Alhasakah came as a blow to this tight operation. The page was one of the most comprehensive sources for news concerning northern and eastern Syria, says Mohammed al Jumaily, a conflict researcher for the group. “Its closure meant that we lost a key source of local reporting, which is already pretty sparse in that region.”
It was an example of how even one mistaken takedown can make the work of human rights defenders more difficult. Yet this is happening on a wider scale: of the 1.7 million YouTube videos preserved by Syrian Archive, a Berlin-based non-profit that downloads evidence of human rights violations, 16% have been removed. A huge chunk was taken down in 2017, just as YouTube began using algorithms to flag violent and extremist content. And useful content is still being removed on a regular basis. “We’re still seeing that this is a problem,” says Jeff Deutsch, the lead researcher at Syrian Archive. “We’re not saying that all this content has to remain public forever. But it’s important that this content is archived, so it’s accessible to researchers, to human rights groups, to academics, to lawyers, for use in some kind of legal accountability.” (YouTube says it is working with Syrian Archive to improve how they identify and preserve footage that could be useful for human rights groups.)
Most people working in conflict monitoring understand that the social media companies are in a tough position. Back in the attic in southeast London, Woods agrees that a lot of violent content has no place on social media sites. But he is frustrated with what he sees as three years of inaction by social networks when it comes to preventing valuable evidence from being lost forever, potentially reducing the chances of human rights violators being convicted of their crimes. “Our own view is that if they are determined to delete videos, photographs, posts and so on, we think it should be put into a cordoned off area that is only accessible to researchers with permissions,” Woods says. “Basically at the moment, it’s an all or nothing. They will delete an entire archive and everything gets permanently lost, as far as we can tell.”
Facebook and YouTube’s detection systems work by using a technology called machine learning, by which colossal amounts of data (in this case, extremist images, videos, and their metadata) are fed to an artificial intelligence adept at spotting patterns. Early types of machine learning could be trained to identify images containing a house, or a car, or a human face. But since 2017, Facebook and YouTube have been feeding these algorithms content that moderators have flagged as extremist — training them to automatically identify beheadings, propaganda videos and other unsavory content.
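As a rough illustration of the general technique described here (and emphatically not the platforms’ actual, proprietary systems), a detector of this kind can be built by fine-tuning a general-purpose vision model on frames that human moderators have already labeled. Everything in the sketch below, including the file paths, the two labels and the choice of model, is a hypothetical assumption.

```python
# Minimal illustrative sketch of machine-learning content detection: fine-tune a
# generic image classifier on frames that human moderators have labeled.
# NOT Facebook's or YouTube's system; paths, labels and model are hypothetical.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical folder layout: data/train/violating/*.jpg, data/train/benign/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained general-purpose vision model and retrain only its
# final layer to distinguish the two moderator-assigned labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one pass over the moderator-labeled frames
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```

The key point of the sketch is the one the conflict monitors worry about: the model learns only whatever patterns exist in the labels it is fed, so mislabeled training examples propagate directly into future automated decisions.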
Both Facebook and YouTube are notoriously secretive about what kind of content they’re using to train the algorithms responsible for much of this deletion. That means there’s no way for outside observers to know whether innocent content — like Eye on Alhasakah’s — has already been fed in as training data, which would compromise the algorithm’s decision-making. In the case of Eye on Alhasakah’s takedown, “Facebook said, ‘oops, we made a mistake,’” says Dia Kayyali, the Tech and Advocacy coordinator at Witness, a human rights group focused on helping people record digital evidence of abuses. “But what if they had used the page as training data? Then that mistake has been exponentially spread throughout their system, because it’s going to train the algorithm more, and then more of that similar content that was mistakenly taken down is going to get taken down. I think that is exactly what’s happening now.” Facebook and YouTube, however, both deny this is possible. Facebook says it regularly retrains its algorithms to avoid this happening. In a statement, YouTube said: “decisions made by human reviewers help to improve the accuracy of our automated flagging systems.”
But Kayyali says there are signs that, for Arabic content especially, the way these algorithms work could be having a detrimental effect. Currently, Islamic extremist content from the Middle East probably makes up the bulk of training datasets, says Kayyali — though there’s no way to know for sure, because the platforms don’t share that information. It means that other Arabic content — like a video of the aftermath of a bombing, where the uploader has blamed “ISIS” in the accompanying text, for example — is also at risk of removal. “We have consistently seen Facebook and YouTube take down documentation of protests from the Arabic speaking world,” Kayyali says.
Despite the human toll of content moderation, conflict monitors say one way to make sure this content is kept online is for the social networks to hire more content moderators, and ensure they are paid and treated just as well as other staff. But both Facebook and YouTube are moving in the other direction — partly out of recognition that content moderation can be a demanding and emotionally damaging job, partly because computers are faster, and partly because running an algorithm is cheaper than employing skilled humans. “This technology is showing a lot of promise, and in some cases we are now even able to detect and remove some harmful content automatically, without human review,” Erin Saltman, Facebook’s EMEA counter-terrorism policy manager, said in a statement to TIME. That’s the case, for example, with reuploads of known extremist content. But when algorithms flag content which has never been seen before, Facebook and YouTube both say it is always routed to a human moderator, who makes a final decision on whether it should be removed.
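The automatic removal of reuploads that Saltman describes typically relies on matching new uploads against fingerprints of content that has already been judged to violate the rules. Below is a minimal sketch of that general idea using perceptual hashing; the library, threshold and file names are illustrative assumptions, not the platforms’ actual pipeline, which uses proprietary fingerprinting and shared industry hash databases.

```python
# Illustrative sketch: flag reuploads by comparing perceptual hashes of new
# frames against hashes of previously removed content. All names are hypothetical.
from PIL import Image
import imagehash

# Fingerprints of frames from content already removed after human review.
known_hashes = [
    imagehash.phash(Image.open(path))
    for path in ["banned_frame1.jpg", "banned_frame2.jpg"]
]

def is_reupload(frame_path, max_distance=5):
    """Return True if a newly uploaded frame is a near-duplicate of known content."""
    new_hash = imagehash.phash(Image.open(frame_path))
    # Subtracting two ImageHash objects gives the Hamming distance between them.
    return any(new_hash - known <= max_distance for known in known_hashes)

if is_reupload("new_upload_frame.jpg"):
    print("Near-duplicate of previously removed content; route for removal.")
```

Because a reupload is, by definition, a copy of something a human has already reviewed, this kind of matching can run without fresh human judgment; genuinely new footage cannot be handled the same way.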
That human review is needed because Facebook’s policies allow some types of violence and extremism but not others — meaning decisions on whether to take content down are often based on cultural context. Has a video of an execution been shared by its perpetrators to spread fear? Or by a citizen journalist to ensure the wider world sees a grave human rights violation? A moderator’s answer to those questions could mean that of two identical videos, one remains online and the other is taken down. “This technology can’t yet effectively handle everything that is against our rules,” Saltman said. “Many of the decisions we have to make are complex and involve decisions around intent and cultural nuance which still require human eye and judgement.”
In this balancing act, it’s Facebook’s army of human moderators — many of them outsourced contractors — who carry the pole. And sometimes, they lose their footing. After several of Eye on Alhasakah’s posts were flagged by algorithms and humans alike, a Facebook moderator wrongly decided the page should be banned entirely for sharing violent videos in order to praise them — a violation of Facebook’s rules on violence and extremism, which state that some content can remain online if it is newsworthy, but not if it encourages violence or valorizes terrorism. The nuance, Facebook representatives told TIME, is important for balancing freedom of speech with a safe environment for its users — and keeping Facebook on the right side of government regulations.
Facebook’s set of rules on the topic reads like a gory textbook on ethics: beheadings, decomposed bodies, throat-slitting and cannibalism are all classed as too graphic, and thus never allowed; neither is dismemberment — unless it’s being performed in a medical setting; nor burning people, unless they are practicing self-immolation as an act of political speech, which is protected. Moderators are given discretion, however, if violent content is clearly being shared to spread awareness of human rights abuses. “In these cases, depending on how graphic the content is, we may allow it, but we place a warning screen in front of the content and limit the visibility to people aged 18 or over,” said Saltman. “We know not everyone will agree with these policies and we respect that.”
But civilian journalists operating in the heat of a civil war don’t always have time to read the fine print. And conflict monitors say it’s not enough for Facebook and YouTube to make all the decisions themselves. “Like it or not, people are using these social media platforms as a place of permanent record,” says Woods. “The social media sites don’t get to choose what’s of value and importance.”
Write to Billy Perrigo/London at billy.perrigo@time.com