November 10, 2021 8:36 PM EST

In May 2019, a video purporting to show House Speaker Nancy Pelosi inebriated, slurring her words as she gave a speech at a public event, went viral on Facebook. In reality, somebody had slowed the footage down to 75% of its original speed.

On one Facebook page alone, the doctored video received more than 3 million views and 48,000 shares. Within hours it had been reuploaded to different pages and groups, and spread to other social media platforms. In thousands of Facebook comments on pro-Trump and rightwing pages sharing the video, users called Pelosi “demented,” “messed up” and “an embarrassment.”

Two days after the video was first uploaded, and following angry calls from Pelosi’s team, Facebook CEO Mark Zuckerberg made the final call: the video did not break his site’s rules against disinformation or deepfakes, and therefore it would not be taken down. At the time, Facebook said it would instead demote the video in people’s feeds.

Inside Facebook, employees soon discovered that the page that shared the video of Pelosi was a prime example of a type of platform manipulation that had been allowing misinformation to spread unchecked. The page—and others like it—had built up a large audience not by posting original content, but by taking content from other sources around the web that had already gone viral. Once audiences had been established, nefarious pages would often pivot to posting misinformation or financial scams to their many viewers. The tactic was similar to how the Internet Research Agency (IRA), the Russian troll farm that had meddled in the 2016 U.S. election, spread disinformation to American Facebook users. Facebook employees gave the tactic a name: “manufactured virality.”

In April 2020, a team at Facebook working on “soft actions”—solutions that stop short of removing problematic content—presented Zuckerberg with a plan to reduce the reach of pages that pursued “manufactured virality” as a tactic. The plan would down-rank these pages, making it less likely that users would see their posts in the News Feed. It would impact the pages that shared the doctored video of Pelosi, employees specifically pointed out in their presentation to Zuckerberg. They also suggested it could significantly reduce misinformation posted by pages on the platform since the pages accounted for 64% of page-related misinformation views but only 19% of total page-related views.

But in response to feedback given by Zuckerberg during the meeting, the employees “deprioritized” that line of work in order to focus on projects with a “clearer integrity impact,” internal company documents show.

This story is partially based on whistleblower Frances Haugen’s disclosures to the U.S. Securities and Exchange Commission (SEC), which were also provided to Congress in redacted form by her legal team. The redacted versions were seen by a consortium of news organizations, including TIME. Many of the documents were first reported by the Wall Street Journal. They paint a picture of a company obsessed with boosting user engagement, even as its efforts to do so incentivized divisive, angry and sensational content. They also show how the company often turned a blind eye to warnings from its own researchers about how it was contributing to societal harms.

A pitch to Zuckerberg with few visible downsides

Manufactured virality is a tactic that has been used frequently by bad actors to game the platform, according to Jeff Allen, the co-founder of the Integrity Institute and a former Facebook data scientist who worked closely on manufactured virality before he left the company in 2019. This includes a range of groups, from teenagers in Macedonia who found that targeting hyper-partisan U.S. audiences in 2016 was a lucrative business, to covert influence operations by foreign governments including the Kremlin. “Aggregating content that previously went viral is a strategy that all sorts of bad actors have used to build large audiences on platforms,” Allen told TIME. “The IRA did it, the financially motivated troll farms in the Balkans did it, and it’s not just a U.S. problem. It’s a tactic used across the world by actors who want to target various communities for their own financial or political gain.”

In the April 2020 meeting, Facebook employees working in the platform’s “integrity” division, which focuses on safety, presented a raft of suggestions to Zuckerberg about how to reduce the virality of harmful content on the platform. Several of the suggestions—titled “Big ideas to reduce prevalence of bad content”—had already been launched; some were still the subjects of experiments being run on the platform by Facebook researchers. Others—including tackling “manufactured virality”—were early concepts that employees were seeking approval from Zuckerberg to explore in more detail.

The employees noted that much “manufactured virality” content was already against Facebook’s rules. The problem, they said, was that the company inconsistently enforced those rules. “We already have a policy against pages that [pursue manufactured virality],” they wrote. “But [we] don’t consistently enforce on this policy today.”

The employees’ presentation said that further research was needed to determine the “integrity impact” of taking action against manufactured virality. But they pointed out that the tactic disproportionately contributed to the platform’s misinformation problem. They had compiled statistics showing that nearly two-thirds of page-related misinformation came from “manufactured virality” pages, compared to less than one fifth of total page-related views.

Acting against “manufactured virality” would bring few business risks, the employees added. Doing so would not reduce the number of times users logged into Facebook per day, nor the number of “likes” that they gave to other pieces of content, the presentation noted. Neither would cracking down on such content impact freedom of speech, the presentation said, since only reshares of unoriginal content—not speech—would be affected.

But Zuckerberg appeared to discourage further research. After presenting the suggestion to the CEO, employees posted an account of the meeting on Facebook’s internal employee forum, Workplace. In the post, they said that based on Zuckerberg’s feedback they would now be “deprioritizing” the plans to reduce manufactured virality, “in favor of projects that have a clearer integrity impact.” Zuckerberg approved several of the other suggestions that the team presented in the same meeting, including “personalized demotions,” or demoting content for users based on their feedback.

Andy Stone, a Facebook spokesperson, rejected suggestions that employees were discouraged from researching manufactured virality. “Researchers pursued this and, while initial results didn’t demonstrate a significant impact, they were free to continue to explore it,” Stone wrote in a statement to TIME. He said the company had nevertheless contributed significant resources to reducing bad content, including down-ranking. “These working documents from years ago show our efforts to understand these issues and don’t reflect the product and policy solutions we’ve implemented since,” he wrote. “We recently published our Content Distribution Guidelines that describe the kinds of content whose distribution we reduce in News Feed. And we’ve spent years standing up teams, developing policies and collaborating with industry peers to disrupt coordinated attempts by foreign and domestic inauthentic groups to abuse our platform.”

But even today, pages that share unoriginal viral content in order to boost engagement and drive traffic to questionable websites are still some of the most popular on the entire platform, according to a report released by Facebook in August.

Allen, the former Facebook data scientist, says Facebook and other platforms should be focused on tackling manufactured virality, because it’s a powerful way to make platforms more resilient against abuse. “Platforms need to ensure that building up large audiences in a community should require genuine work and provide genuine value for the community,” he says. “Platforms leave themselves vulnerable and exploitable by bad actors across the globe if they allow large audiences to be built up by the extremely low-effort practice of scraping and reposting content that previously went viral.”

The internal Facebook documents show that some researchers noted that cracking down on “manufactured virality” might reduce Meaningful Social Interactions (MSI)—a statistic that Facebook began using in 2018 to help rank its News Feed. The algorithm change was meant to show users more content from their friends and family, and less from politicians and news outlets. But an internal analysis from 2018 titled “Does Facebook reward outrage” reported that the more negative comments a Facebook post elicited—content like the altered Pelosi video—the more likely users were to click the link in the post. “The mechanics of our platform are not neutral,” one Facebook employee wrote at the time. Because content with more engagement was placed higher in users’ feeds, it created a feedback loop that incentivized the posts that drew the most outrage. “Anger and hate is the easiest way to grow on Facebook,” Haugen told the British Parliament on Oct. 25.

How “manufactured virality” led to trouble in Washington

Zuckerberg’s decision in May 2019 not to remove the doctored video of Pelosi seemed to mark a turning point for many Democratic lawmakers fed up with the company’s larger failure to stem misinformation. At the time, it led Pelosi—one of the most powerful members of Congress, who represents the company’s home state of California—to deliver an unusually scathing rebuke. She blasted Facebook as “willing enablers” of political disinformation and interference, a criticism increasingly echoed by many other lawmakers. Facebook defended its decision, saying it had “dramatically reduced the distribution of that content” as soon as its fact-checking partners flagged the video for misinformation.

Pelosi’s office did not respond to TIME’s request for comment on this story.

The circumstances surrounding the Pelosi video exemplify how Facebook’s pledge to show political disinformation to fewer users only after third-party fact-checkers flag it as misleading or manipulated—a process that can take hours or even days—does little to stop this content from going viral immediately after it is posted.

In the lead-up to the 2020 election, after Zuckerberg discouraged employees from tackling manufactured virality, hyper-partisan sites used the tactic as a winning formula to drive engagement to their pages. In August 2020, another doctored video falsely claiming to show Pelosi inebriated again went viral. Pro-Trump and rightwing Facebook pages shared thousands of similar posts, from doctored videos meant to make then-candidate Joe Biden appear lost or confused while speaking at events, to edited videos claiming to show voter fraud.

In the aftermath of the election, the same network of pages that had built up millions of followers between them using manufactured virality tactics used the reach they had built to spread the lie that the election had been stolen.

Write to Billy Perrigo at billy.perrigo@time.com.
