
Meta, TikTok, and Twitter Hope to Fight Election Misinformation. Experts Say Their Plans Aren’t Enough


With the U.S. midterm elections less than three months away, social media companies are preparing for another intense fight against misinformation. TikTok, Meta, and Twitter say they’re ready for the challenge this time, and each has drafted a similar playbook: more fact-checking, more careful labeling of misinformation, and tighter restrictions on political advertising.

But experts who study social media and politics point out that these new policies aren’t that different from those in place in 2020—which could have consequences, since video content may play a larger role in the spread of misinformation this year.

“They say they tested these processes during the 2018 midterms and 2020 [general election] and that they have it under control, but this is anything but the case,” says Jennifer Grygiel, an associate professor at Syracuse University who researches social media. “We don’t exist in a media environment today that looks like anything we’ve known, and we need to talk about that more.”

Video-based platforms like TikTok and Instagram Reels could be especially problematic, says Zeve Sanderson, the founding executive director of NYU’s Center for Social Media and Politics. As they rise in popularity, media experts fear that the automated moderation systems in place may struggle to catch videos that contain misinformation.

“No election that we’ve seen in the last decade has been like the one before,” he says. “​​We need to remain vigilant across this online ecosystem.”

Ever since the 2016 presidential election, during which social media platforms were used to spread misleading information, social media companies have launched expensive campaigns to educate the public about unreliable content and invested more in moderation policies, only to see harmful posts continue to fall through the cracks.

Here’s what major social media platforms are doing this time to stop election misinformation.

TikTok launches ‘election center’

The video platform launched an election center where users can get “authoritative information” about voting sites and election results—in 45 languages. All election-related posts will have clickable labels that will allow viewers to get information about the elections in their state, including on videos from the government, politicians and political parties.

This may be the first election cycle that TikTok has the time—and experience—to combat the spread of misinformation that garnered so much attention during the 2020 election, when the platform first took off. Once known for its catchy dances, TikTok has begun to see more posts involving political and social misinformation, according to Grygiel. TikTok accounts in Germany impersonated prominent political figures during the country’s last national election. Misleading posts in Colombia falsely attributed quotes from a candidate to a cartoon villain, and videos in the Philippines amplified myths about the country’s former dictator, whose son became president.

That degree of influence means TikTok, which is owned by Chinese tech giant ByteDance, could play a major role in the spread of misleading information if left unchecked. For example, six hashtags promoting conspiracy theories about the 2020 election garnered more than 38 million views as of July, though the company has blocked some hashtags including #StopTheSteal.

But video content is more difficult to moderate than text for the automated systems that platforms use. “We haven’t seen platforms respond to the ascendancy of video content,” Sanderson says. “They’re going to run into similar issues—human labeling isn’t going to be enough here.”

TikTok banned paid political advertising in 2019 (unlike Meta, which only paused it around election time). But some influencers have found loopholes, posting sponsored content in exchange for favors rather than money. Some campaigns and groups, including President Joe Biden’s team, have collaborated with TikTok influencers in 2020 and beyond.

The company also says it will lean heavily on fact-checkers, keeping videos out of users’ recommended feeds while they are still under review. Videos whose claims can’t be verified will be labeled as such, and users will see a warning before sharing them.

But concerns remain because videos can make a big impact. “Video and images are much more powerful in terms of persuading people or making them have emotional reactions,” says Shannon McGregor, an assistant professor of journalism and media at the University of North Carolina. “It’s something that we need to be paying a lot more attention to.”

Meta bans new election week political ads

Just like in 2020, Facebook’s parent company Meta is implementing a “restriction period” for new ads about social issues, elections or politics in the United States. The advertising ban will be in effect between Nov. 1 and Nov. 8—the week leading up to General Election Day—and will prohibit any new ads relating to these topics from being published. Advertisers will also be restricted from editing their ads after publication.

Meta took a very similar approach around the time of the 2020 general election, although that self-imposed ban on political ads remained active for five months instead of just one week. The company’s decision to once again impose a political advertising ban marks a significant development in digital politics, cutting off what campaign strategists describe as a massive pipeline for reaching potential supporters who fuel fundraising.

“Our rationale for this restriction period remains the same as 2020,” Meta Global Affairs President Nick Clegg said in a statement. “In the final days of an election, we recognize there may not be enough time to contest new claims made in ads. This restriction period will lift the day after the election and we have no plans to extend it.”

Meta says it also has hundreds of people across more than 40 teams working on the midterms, including 10 independent fact-checking partners in the U.S. to address viral misinformation, half of which will be covering content in Spanish. The company also says it will partner with Univision and Telemundo to launch fact-checking services on WhatsApp, the private messaging service owned by Meta.

Election-related content that contains misinformation about voting procedures or calls for violence tied to election results will be removed across all of Meta’s platforms. The company will also reject ads that encourage people not to vote or that call into question the legitimacy of the upcoming election. These practices were also in place during the 2020 election cycle.

Twitter to moderate fake and misleading election info

Like other social media platforms, Twitter will label misleading information about elections and civic events and provide links to more credible information. The company says tweets that fall into this category will not be recommended or amplified by the platform, and users will be shown a warning prompt before liking or sharing them.

This isn’t a new approach for Twitter: the company rolled out redesigned misinformation warning labels last year and found that they increased click-through rates by 17%. Tweets carrying the new labels also saw notable drops in engagement, with replies down 13%, retweets down 10%, and likes down 15%.

Twitter will also bring back its “prebunks” from 2020—messages that appear at the top of users’ feeds to debunk misinformation. The platform has already begun rolling out new state-specific event hubs during primaries that will provide real-time election information, and now plans to create a nationally focused event page available to everyone in the U.S.

The challenges ahead

Analysts who study politics and social media fear that misinformation could be a more serious issue in this election than in the past because the stakes are so high. McGregor says: “There already is more misinformation, and things will get worse as we get closer to election day.”

All 435 House seats and 34 of the 100 Senate seats are on the ballot this year, as Democrats hope to maintain their narrow control of both chambers. Some strategies, like adding warning labels in multiple languages and creating hubs of election information, could be effective, but they might miss the mark on fundamental issues.

Sanderson says the problem surrounding the Jan. 6 attack on the U.S. Capitol was that platforms mobilized people to act, and that much of that organizing happened out of public view. “How do you moderate closed groups?” Sanderson asks. “Far too many of the social platform announcements focus on content, and don’t really discuss the potential connection between content and mobilization.”

Individual politicians’ accounts are also a source of misinformation, McGregor says, a problem that “none of these policies address.”

Many analysts also worry that social media platforms have devoted fewer resources to the misinformation fight over the last year, making it a priority only during major elections. As Meta shifted its focus to the metaverse, the company reduced the number of employees working on election integrity from 300 to 60 earlier this year, according to The New York Times.

And some of their policies weren’t in place year-round, which means that misinformation is already circulating. “It seems like Facebook took a break,” McGregor says. “If the policies are not enforced all the time, then they’re not going to be successful.”

But for all the bad, there’s plenty of good. For one thing, McGregor says, “social media is great at bringing in new voters.”


Write to Nik Popli at nik.popli@time.com