Why Online Moderation Often Fails During a Conflict

Iyer is the Managing Director of the USC Marshall School’s Neely Center. Prior to this role, he spent more than four years at Meta leading data science, research, and product teams focused on improving the societal impact of social media.

We’ve all seen a proliferation of sensational, divisive, and false content online relating to the conflict between Israel and Hamas. In response, journalists, regulators, politicians, and civil society groups have pushed social media companies to increase content moderation efforts, which involve finding and addressing individual pieces of content that violate platform policies. During my 4+ years at Meta, I worked on improving the platform’s impact on numerous conflicts. Since leaving, I have focused on platform design, in part because the limits of content moderation are most acute when the stakes are highest. There is certainly content that should be moderated, and platforms could do better in those cases, but moderation will never reach more than a small fraction of harmful content, so focusing on it will not materially improve online discourse.


One example of content from the current conflict, highlighted as false in the New York Times, was a set of posts claiming that St. Porphyrios Church in Gaza had been hit.

In response to such posts, church officials put out a statement saying that “St. Porphyrios Church in Gaza is untouched,” but that rebuttal was only possible because the claim was specific and retroactive (several days later, the church was indeed hit by an air strike). More common in online discourse are more general statements, such as posts alleging the wide and indiscriminate use of phosphorus bombs or sexual assault by the other side. These kinds of general, fear-inducing statements are impossible to moderate effectively because:

  1. You can’t prove a general negative. How would you go about proving that something has never occurred, when no claim is made about any particular instance? Debunking is possible when a poster makes a specific claim or uses recycled images, but many people won’t do that.
  2. You don’t want to censor important content. Such statements may document human rights violations, and preserving that documentation matters. People can also use such information to stay safe, if they can trust it. Mistakenly taking down such content could be a costly error.
  3. People are motivated to spread such content. In the context of a conflict, spreading inflammatory information about the other side is a tactic that all sides are naturally going to engage in. Such content will be posted often and spread quickly by people who want to support their side, leaving no time to get the facts correct before it has been widely read.
  4. People are eager for the latest information. That demand means even people with no information or opinion about the conflict are still motivated to compete for attention, followers, and ad revenue by being first with the most compelling new information. They have little incentive to care whether that information is true, false, or potentially bad for the world. If something gets taken down, they can simply try again, perhaps with a new account.

To be clear, content moderation does do some good, important work. But asking companies simply to try harder at moderation, as the EU has done with Meta and Twitter, will have a very limited effect, because most harmful content does not violate platform policies and often co-occurs with authentic political expression. As one internal Facebook document put it: “significant tranches of harmful content, such as personal stories, leading questions, or derision, are often entirely unenforceable. As FB Integrity work matures and becomes more well-known, problematic actors pivot toward these gaps.” And in those gaps, platforms have acknowledged that content often gets rewarded with more distribution for approaching the line of misinformation and hate.


In fact, when platforms rely on moderation, they often create new problems. We know from past experience that increased enforcement around the Israel-Palestine conflict has led to credible reports of censorship and bias. When applied carefully to a limited set of clear cases of harmful content (e.g., a misrepresented image), moderation is an important tool. But most cases of harmful content are not clear-cut, and when moderation is expanded as an emergency measure to address a broader set of harmful content, where judgments require more nuance and subjectivity, mistakes are likely to be made.

That is why companies often rely on content-neutral “break the glass” measures to address crises. We can learn from these efforts to suggest alternatives to a focus on content moderation that do not rely on rigid rules about which content is good or bad. Steps that can address misinformation more scalably include: rate limiting all accounts, so that small groups of people cannot manipulate the system; removing engagement incentives, so that people do not get more distribution for being more sensational; increasing privacy protections for authoritative sources discussing events online; and prioritizing accounts with positive, trustworthy histories over brand-new accounts created to monetize the moment. These methods have proven useful enough that many have asked why they are ever turned off, and some have been adopted permanently. At USC’s Neely Center, we have gathered support for a “design code” to codify the best such practices that a wide set of stakeholders would request from online platforms.
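To make these content-neutral measures more concrete, here is a minimal sketch of how a crisis-mode distribution score might combine them. It is purely illustrative: the function, field names, weights, and thresholds are hypothetical assumptions, not a description of any platform’s actual ranking system.

```python
# Illustrative sketch only: a content-neutral "break the glass" scoring pass.
# All names, weights, and thresholds are hypothetical, not any platform's real system.
from dataclasses import dataclass

@dataclass
class Post:
    author_account_age_days: int
    author_recent_posts_24h: int
    author_trust_score: float    # 0.0-1.0, built from a long-term positive history
    predicted_engagement: float  # output of a clicks/comments/reshares model

POSTS_PER_DAY_CAP = 50           # rate limit applied to every account equally
ENGAGEMENT_WEIGHT_CRISIS = 0.0   # during a crisis, stop rewarding sensational content
NEW_ACCOUNT_AGE_DAYS = 30

def crisis_distribution_score(post: Post) -> float:
    """Score a post for distribution using signals about behavior, not content."""
    # Rate limiting: accounts that exceed the cap get no additional distribution.
    if post.author_recent_posts_24h > POSTS_PER_DAY_CAP:
        return 0.0

    # Remove engagement incentives: predicted clicks/reshares carry no weight.
    score = ENGAGEMENT_WEIGHT_CRISIS * post.predicted_engagement

    # Prioritize accounts with positive, trustworthy histories...
    score += post.author_trust_score

    # ...and down-weight brand-new accounts created to monetize the moment.
    if post.author_account_age_days < NEW_ACCOUNT_AGE_DAYS:
        score *= 0.2

    return score
```

The point of the sketch is that every input is about an account’s behavior and history rather than a judgment about whether a particular piece of content is true or false.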

Some amount of propaganda is inevitable in a conflict. The question is not its existence, which predates social media, but rather why it is so much more prevalent online than offline. What are the structural features of social media that cause this to occur? Should we be optimizing conflict news for comments, clicks, and reshares? Asking companies to enforce their policies more vigorously, as several politicians have done, without the accompanying design changes that critics such as Tristan Harris and Roger McNamee have called for, will lead to disappointing and potentially harmful results. Moderation is a tool to use after you have reduced the incentive to compete for attention. Rather than focusing on moderation, politicians should be pushing companies to make simple changes to the designs of their systems, to ensure that they are not incentivizing divisive misinformation that can make a bad situation even worse.


