Some users are significantly more likely to see disturbing content on Facebook than others, according to internal company documents leaked by whistleblower Frances Haugen.
A 2019 report from Facebook’s Civic Integrity team details the results of a study that measured the impact of removing certain integrity protections from some users’ News Feeds. It found that, every day on the platform, 11% of users were seeing content marked as “disturbing,” 39% were seeing content that could be considered “hateful,” and 32% were seeing “borderline” content containing some form of nudity.
This reporting is based on Haugen’s disclosures to the Securities and Exchange Commission (SEC), which were also provided to Congress in redacted form by her legal team. The redacted versions received by Congress were obtained by a consortium of news organizations, including TIME. Some publications have described them as the “Facebook Papers.”
Not all users are exposed to these types of disturbing posts in equal measure. Facebook’s own research indicates vulnerable communities, including Black, elderly and low-income users, are among the groups most harmed by the prevalence of disturbing content on the site. At the time of the 2019 integrity report, Facebook’s researchers were still defining what constituted “disturbing” content.
“Reach of bad experiences remains a [work in progress] in Integrity and we are still finalizing the exact thresholds used to say that a user had a bad experience for some integrity concept,” the report read. “The definition used in this analysis is simply one [viewport view] on a piece of bad content, [or one time a piece of bad content was visible on a user’s screen]. So the exact values of this analysis should be taken with a grain of salt.”
In response to a request for comment on the 2019 “bad experiences” report, a Facebook spokesperson told TIME that the numbers in the report quantified only the impact of rolling back demotions, an integrity intervention that reduces the distribution of content that likely violates or borders on violating Facebook’s community standards, for the roughly 2% of users in the study’s test group.
“Testing product and safety features is an important part of improving our platform and something that’s standard in tech and many other industries,” the spokesperson said. “It helps us build the tools to reduce the prevalence of hate speech and other types of problematic content on our platform.”
However, the report notably states that, at the time of the study, the integrity protections under review were having “minimal impact” on some of the most “severe harms” to users.
“These are significant proportions of [daily active people] and we effectively have no ranking interventions in place to mitigate this,” the report reads. “Rather, our impact is limited to [objective content quality offenses like clickbait and ad farms] and misinformation.”
Facebook’s spokesperson also pointed TIME to Facebook’s most recent Community Standards Enforcement Report, which included reports on violence and graphic content, adult nudity and sexual activity, and hate speech. The reports showed that the prevalence of these issues in Q2 2021 was about 0.03-0.04%, 0.04% and 0.05% of content viewed, respectively, or roughly three to four, four and five views out of every 10,000 on Facebook. The spokesperson noted that Facebook sees “prevalence” as the most important metric because it represents not what the company caught but what it missed and people saw on the platform.
One document included in the Facebook Papers references studies assessing digital literacy that have shown that Facebook users with lower digital literacy skills are both overexposed to certain harms, like graphic violence and borderline nudity, and less able to assess the veracity of posts. Defined in another document as users who are “less able to use features designed to help them ‘see less’ of things they dislike,” low-literacy users make up a significant portion of the total Facebook population.
“This creates harm when users are forced to see content that upsets them with no agency to react,” the document reads.
A survey of 67,000 Facebook users across 17 countries conducted by the Digital Literacy Working Group found that at least one fourth and up to one third of the global Facebook population lacks key digital skills, while at least one sixth of the U.S. Facebook population has medium or low digital skills.
While high-literacy users are able to signal that they don’t want to see more of certain types of content through hiding, snoozing or reporting offending posts, Facebook found that low-literacy users are less able to flag content they don’t like and to “unsubscribe” from low-quality sources.
The problem is compounded by the fact that when disturbing posts are not flagged and removed by Facebook’s automated systems, the harm falls disproportionately on low-digital-literacy users who are less able to filter out that content.
In an Oct. 17 installment of its “Facebook Files” series of investigative reports on the Facebook Papers, the Wall Street Journal detailed the extent to which Facebook’s artificial-intelligence systems are tripped up by various forms of disturbing content.
“Facebook’s AI can’t consistently identify first-person shooting videos, racist rants and even, in one notable episode that puzzled internal researchers for weeks, the difference between cockfighting and car crashes,” the report reads. “On hate speech, the documents show, Facebook employees have estimated the company removes only a sliver of the posts that violate its rules—a low-single-digit percent, they say.”
Unlike violence and nudity, Facebook’s research suggests that hate speech and other types of hateful content are more of an issue among users with higher digital literacy skills, who then use those skills to seek out more of the same. Hate speech in particular was flagged by Haugen during her Oct. 25 testimony to British lawmakers as a problem in two ways: not only is such speech far more pervasive and harmful among certain subpopulations of users, but it is also not curbed by Facebook’s algorithms. The disparate impact on certain groups of users becomes invisible, however, when only aggregate data is reported.
“When we say, how does the algorithm work, we need to be thinking about what is the experience of the algorithm for lots of individual populations,” she said. “When Facebook gives you aggregate data, it will likely hide how dangerous the systems are because the experience of the 95th percentile for every single integrity harm is radically different and the 99th percentile is even more radically different than the median experience. And I want to be really clear: the people who go and commit acts of violence, those are people who get hyper-exposed to this dangerous content. And so we need to be able to break out by those extreme experiences.”
Haugen says that Facebook removes less than 5% of hate speech from the platform.
Regarding hate speech, Facebook’s spokesperson directed TIME to an Oct. 17 Facebook Newsroom post from Facebook’s vice president of integrity, Guy Rosen, reemphasizing that the company views “prevalence”—the percentage of viewed content that consists of hate speech—as the most meaningful hate speech metric. Facebook says it uses technology systems to help proactively detect hate speech, route it to reviewers and remove it when it violates Facebook’s policies, thereby reducing the prevalence of hate speech on the platform.
“Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress,” Rosen wrote. “This is not true. We don’t want to see hate on our platform, nor do our users or advertisers, and we are transparent about our work to remove it. What these documents demonstrate is that our integrity work is a multi-year journey. While we will never be perfect, our teams continually work to develop our systems, identify issues and build solutions.”
Correction, November 3
The original version of this story misstated the amount of hate speech Facebook claims it removes. The company does not claim to remove 90% of all hate speech on its platform. It has said that it removed 31.5 million instances of hate speech in the second quarter of 2021, but did not say what percentage of total hate speech that figure represented.
Write to Megan McCluskey at megan.mccluskey@time.com