If there is one thing that social media companies, political campaigns and all of their critics agree on, it is that widespread uncertainty and confusion are all but inevitable on Nov. 3. With likely delays in counting due to an unprecedented number of mail-in ballots, and with most traditional campaign events suspended because of the ongoing pandemic, social media platforms are bracing to handle the dissemination of news on Election Day and in its aftermath, much of which will play out online.
In recent weeks, Facebook, Twitter and YouTube, as well as less politics-focused platforms like TikTok and Pinterest, have all released new policies to stem the spread of election and voting disinformation, such as removing or labeling false voting information or claims of election rigging. Now they are grappling with how to enforce those measures if the results of the election remain unclear or contested for a prolonged period.
Platforms’ contingency plans range from what to do if a candidate prematurely declares victory before the results are official to how to stop videos questioning the legitimacy of the election from going viral. In an indication of how starkly Twitter sees the potential impact, the company has said it will take action on tweets “inciting unlawful conduct to prevent a peaceful transfer of power or orderly succession” – a jarring line to read about an American election.
Many of these platforms spent the four years since 2016 learning to quickly detect and remove content from foreign actors, but the mass spread of domestic disinformation presents a new and more difficult challenge. This is especially true when it comes to the dilemma of how to handle posts by President Donald Trump and his allies, who for months have used their social media accounts to spread the very kind of misinformation about voter fraud and election rigging that this raft of new policies is designed to prevent. Taking down social media posts on Nov. 3 won’t stop the spread of false claims or defuse tensions if the very process of the election has been called into question for months by the commander-in-chief.
“Within that context, whatever social media platforms do is null and void. The election results will be confused — that’s just a foregone conclusion,” says Graham Brookie, the director of the Atlantic Council’s Digital Forensic Research Lab, which tracks misinformation.
“Much like we won’t know the results on election night, we need to stop thinking of the election as ‘Election Day.’ It’s already happening,” Brookie says. He says platforms need to focus on building up Americans’ trust in official, non-partisan sources that they can go to for reliable information. “If the U.S. is depending on private social media platforms to reliably communicate the results of the election, then we’re in bad shape. I cannot communicate how f-cked we are.”
This year, social media companies have become increasingly willing to moderate more content not because of the election, but because of the rampant spread of public health disinformation by politicians and partisan outlets during the pandemic. Their efforts to stem the barrage of dangerous COVID-19 conspiracies show both the promise and the limits of these new policies, experts say. In August, Facebook, Twitter and YouTube all succeeded in stopping the sequel to the conspiracy video “Plandemic” from going viral. According to Facebook, it removed more than 7 million posts spreading misinformation about COVID-19 from its main site and Instagram between April and June. It also attached warning labels to a staggering 98 million posts that were judged to be misleading about the virus, but less harmful.
Despite these efforts, large amounts of misinformation stayed up long enough to spread beyond the platforms’ control. Right-wing news website Breitbart posted a video in which a group of white-clad people calling themselves “America’s Frontline Doctors” claimed that “you don’t need a mask” to protect yourself from COVID-19, and that the anti-malaria drug hydroxychloroquine is “a cure for COVID.” It quickly racked up 20 million views on Facebook before it was taken down, amplified along the way by tweets from Trump and prominent supporters. Even after its removal, clips continued to circulate via WhatsApp and other messaging platforms – a preview of what is likely to happen with disinformation in the aftermath of the election, according to analysts.
Social media companies will need to act more quickly when it comes to handling bad information about election results, says Carly Miller, a research analyst at the Stanford Internet Observatory who has been tracking how different social media platforms are addressing election misinformation. “The next step is to enforce these policies in a clear, transparent, and timely manner, which we have seen really makes a difference in preventing the spread of election-related misinformation,” she says.
“On the night of the election, the devil will be in the details,” says Brookie. “It will depend on how strongly and quickly they enforce these policies.”
Here’s what five of the top social media platforms are doing to prepare for the next two months:
FACEBOOK
In a lengthy post on Sept. 3, Facebook CEO Mark Zuckerberg said he worried that “with our nation so divided and election results potentially taking days or even weeks to be finalized, there could be an increased risk of civil unrest across the country.”
“This election is not going to be business as usual,” he wrote. “It’s important that we prepare for this possibility in advance and understand that there could be a period of intense claims and counter-claims as the final results are counted. This could be a very heated period.”
That same day, Facebook, the world’s largest social media platform with roughly 175 million users in the U.S. alone, announced a series of election-related initiatives. It said it would prohibit new political ads in the week leading up to Nov. 3, though ads placed earlier can continue running. It is also applying warning labels to posts that seek to undermine the outcome or legitimacy of the election, or allege that legal voting methods led to fraud. If any candidate or campaign tries to declare victory before the final results are in, Facebook will add a label directing users to official information from Reuters or the National Election Pool, a consortium of U.S. television news networks.
The platform had already been working on the problem during the presidential and state primaries earlier this year. From March to July, the company removed more than 110,000 pieces of content from Facebook and Instagram in the U.S. for violating its voter interference policies, spokeswoman Katie Derkits told TIME. These policies are meant to prevent voter suppression efforts that spread inaccurate information about how, where, and when to vote. From March to May 2020, Facebook also displayed warnings on more than 50 million pieces of content on the platform. Nearly 95% of people who saw these warning labels did not click through to see the original post, according to the company, which plans to continue these policies through November’s election.
Facebook does not fact-check misinformation in politicians’ posts or ads, unlike Twitter, which flags false claims. Zuckerberg has defended this approach, saying users should hear directly from politicians and that the company does not want to stifle free speech. But in the face of Trump’s repeated allegations that the election is already rigged, the company has reportedly been exploring its options if the President refuses to accept the results, questions their validity or claims that the Postal Service somehow interfered with mail-in ballots.
TWITTER
Twitter similarly updated its “civic integrity policy” in early September to lay out a game plan for the election – including that it will go as far as taking down posts from its platform. The company says it will not only remove or attach a warning label to any claims of victory prior to election results being official, but also take action on any tweets “inciting unlawful conduct to prevent a peaceful transfer of power or orderly succession.”
When deciding whether to remove or label these posts, Twitter will consider whether the content falls into the category of the “most specific falsehoods and the propensity for the greatest harm,” or “simply a mischaracterization” that could be labeled, spokesperson Trenton Kennedy told TIME. In the latter case, only users who follow the account in question will see the tweet shared to their timeline, complete with a tag warning the information is disputed and a link to an official source. The company’s algorithm also won’t promote it to others, even if it’s a top conversation.
Twitter says it will also act on any claims that might cast doubt on voting, including “unverified information about election rigging, ballot tampering, vote tallying, or certification of election results.” This policy has already been thoroughly tested by Trump, who uses the platform as his primary means of communication and has more than 85 million followers. In recent months, the company has attached warning labels to several of his tweets for spreading misleading information about mail-in ballots, for sharing a manipulated video, and for inciting violence.
YOUTUBE
YouTube rolled out plans to remove misleading election and voting content back in February, on the day of the Iowa caucuses. The video-sharing platform said it would remove posts that promote false information about voting days and locations, lies about candidates’ eligibility, and videos manipulated by artificial intelligence. It will enforce these policies “consistently, without regard to a video’s political viewpoint,” the company’s VP of Government Affairs & Public Policy Leslie Miller insisted in a blog post.
But the task is daunting. Around three-quarters of U.S. adults use YouTube, according to a 2019 Pew survey, and more than 500 hours of video are uploaded to the site every minute. Thousands of popular YouTube personalities live-stream on the site, often mixing politics or misinformation into the rest of their content. The platform has long had a problem with its recommendations algorithm, which experts and critics say pushes users towards more extreme content and rewards problematic videos.
In August, YouTube pledged that it would elevate “authoritative voices” before and during the election – for example, it will recommend content from official verified sources in the “watch next” column and in searches about the election or the candidates. On Election Night, it will give users previews of verified news articles in their search results “along with a reminder that developing news can rapidly change,” according to Miller. It has previously seen some success with this method during breaking news events. In 2019, the consumption of content from official “authoritative news partners,” which include CNN, The Guardian and Fox News, grew by 60%, it says.
TIKTOK
TikTok, the massively popular short-form video app owned by a Chinese tech firm that has been caught up in a recent battle over national security concerns, has also rolled out new policies for its more than 100 million U.S. users. In August, the company announced new measures to “combat misinformation, disinformation, and other content that may be designed to disrupt the 2020 election.”
These include expanded fact-checking partnerships to verify election-related news, and adding an option for users to report election misinformation. TikTok says it’s working with experts, including the Department of Homeland Security, to guard against influence campaigns by foreign actors. It has also partnered with popular creators to make video series about media literacy and misinformation with names like “Question the Source,” “Fact vs. Opinion” and “When to Share Vs When to Report.”
While it’s the least political of the big social media apps, TikTok has had its own recent brushes with misinformation. In June, a group of TikTok users took credit for inflating expectations for a massive Trump rally in Tulsa, Oklahoma, by encouraging thousands of users to register and then not show up. The Trump campaign, which had touted more than 1 million RSVPs for the rally, did not fill the 19,000-seat arena.
The company has also focused on blocking “synthetic or manipulated content that misleads users by distorting the truth of events in a way that could cause harm.” In August, for instance, it removed a video that had been manipulated to imply that House Speaker Nancy Pelosi was drunk or drugged and was viewed on the platform more than 40,000 times. Facebook, by contrast, attached a warning label to the video but allowed it to stay up, where it racked up millions of views.
PINTEREST
Pinterest, the image-sharing platform, similarly started rolling out policies on COVID-19 misinformation earlier this year and then updated them for the election. On Sept. 3, it added a “civic participation misinformation” section to its community guidelines, saying it would remove or limit posts with false or misleading content about how and where to vote, false claims about ballots, threats against voting locations or officials and “content apparently intended to delegitimize election results on the basis of false or misleading claims.”
The company, which banned political advertising in 2018, also said it would no longer show ads to users searching for elections-related terms or the names of candidates.
Write to Vera Bergengruen at vera.bergengruen@time.com