2020 Election

Under Scrutiny, Facebook and Twitter Face Their Biggest Test on Election Day


Four years after Russia leveraged social media to mount an extensive influence campaign in the 2016 U.S. election, what happens on Nov. 3 will test technology companies’ much-touted new safeguards to block both foreign and domestic actors from causing chaos on their platforms.

Facebook, Twitter, and other social media companies have spent months rolling out increasingly aggressive policies meant to curb the spread of misinformation, possible voter suppression and calls for violence. But all of that preparation is unlikely to matter unless the companies can effectively enforce those policies amid the rising tensions of Election Day and its aftermath.

Given the unprecedented number of Americans who voted by mail, there are likely to be delays before the winners of the presidential race and down-ballot contests are officially declared. Technology companies have planned extensively for scenarios that could unfold in that gap: a candidate prematurely declaring victory, misinformation fueling civil unrest, or calls for violence.

Social media executives are clearly aware of the stakes. This election “will certainly be a test for Facebook,” CEO Mark Zuckerberg said last week on the company’s quarterly earnings call. “Election integrity is and will be an ongoing challenge…I’m worried that with our nation so divided, and election results potentially taking days or weeks to be finalized, there’s a risk of civil unrest across the country.”

Facebook announced and subsequently fine-tuned a number of policy changes in recent months. It has banned calls for coordinated interference at polling stations, including calls to bring weapons to these sites. It also said that it would ban posts that use “militarized language” meant to intimidate voters, including words like “army” or “battle,” vice president of content policy Monika Bickert told reporters in early October. This came after the president’s son, Don Jr., was featured in campaign videos calling for “every able-bodied man and woman to join Army for Trump’s election security operation” to “defend their ballots.” All of these new policies also apply to Instagram, which is owned by Facebook.

The company also said it would stop accepting new political ads a week before the election, and that it would ban all political ads indefinitely after Nov. 3. Since early October, Facebook has displayed voting information at the top of users’ feeds, including state-specific details for how to vote safely. Facebook has also discussed the possibility of taking more drastic emergency measures, including suppressing inflammatory posts as it has done in countries like Sri Lanka and Myanmar, according to the Wall Street Journal.

Twitter, which has long been President Trump’s top communication platform, also rolled out more specific policies ahead of the election. Political Twitter accounts with more than 100,000 followers will be subject to “additional warnings and restrictions” if they spread false information: an offending tweet may, for example, receive a warning label that requires followers to tap through another screen to read it.

The platform will label any tweets from candidates in presidential or congressional races who claim victory before the results have been officially called, directing users to Twitter’s main election page for the latest information from verified sources. It will also remove any tweets that encourage election interference or intimidation, as well as any that appear to incite violence, according to Twitter spokesperson Nicholas Pacilio.

Some of these changes apply to regular Twitter users as well. In the week before the election, Twitter added an extra step to retweeting, or sharing, a post: users are prompted to add their own thoughts or to read the linked article before sharing, a move designed to slow the spread of misinformation.

Similarly, YouTube has said it will remove any content meant to interfere with voting, for example any videos “telling viewers to create long voting lines with the purpose of making it harder for others to vote.” It says it will remove any “content that has been technically manipulated or doctored in a way that misleads users and may pose a serious risk of egregious harm,” as well as content that contains “hacked information about a political candidate shared with the intent to interfere in an election.”

Despite all of these new policies, the companies will be under enormous pressure to justify any move to restrict or remove political content. In the months before the election, Trump and his Republican allies stepped up their attacks on “Big Tech,” which they have long accused of censoring conservative viewpoints, including by holding several congressional hearings on the platforms’ alleged bias. The criticism escalated when Facebook and Twitter began to label, and in some cases remove, the president’s own posts for violating their new guidelines against election and COVID-19 misinformation.

“This is election interference at the hands of the Silicon Valley Mafia, and it is dangerous for our democracy,” Trump deputy national press secretary Samantha Zager said in a statement last week, after Facebook removed Trump campaign ads that violated its new policies. She had previously accused Twitter of “attempting to silence voters and elected officials to influence our election” with its new guidelines.

So far, however, the platforms have made it clear that they mean business. On the eve of the election, both Facebook and Twitter added warning labels to a Trump post that called the Supreme Court’s decision to let Pennsylvania officials count mail-in ballots postmarked by Election Day and received up to three days later “a VERY dangerous one” that would “allow rampant and unchecked cheating.” Twitter marked the post as “misleading” and, under its new guidelines, prevented users from replying to it or liking it.


Write to Vera Bergengruen at vera.bergengruen@time.com