The 5 Most Important Revelations From the ‘Facebook Papers’

A more complete portrait of how acutely aware Facebook was of its harmful effects emerged Monday, both from Frances Haugen’s testimony in front of the British Parliament and from a series of reports based on internal documents she leaked, dubbed “The Facebook Papers.” During the 2.5-hour question-and-answer session, Haugen repeatedly said that Facebook puts “growth over safety,” particularly in developing parts of the world where the company lacks the language and cultural expertise to moderate content without fostering division among users.

Here are the most explosive revelations from her Oct. 25 testimony and the internal documents.

Facebook fails to moderate harmful content in developing countries

The documents, confirmed by multiple news outlets, reveal that problems with hate speech and disinformation are dramatically worse in the developing world, where content moderation is often weaker. In India, Facebook reportedly did not have enough resources or expertise in the country’s 22 officially recognized languages, leaving the company unable to grapple with a rise in anti-Muslim posts and fake accounts tied to the country’s ruling party and opposition figures. According to one document, 87% of Facebook’s global budget for time spent on classifying misinformation goes toward the United States, while 13% is set aside for the rest of the world, even though North America accounts for just 10% of the platform’s daily users.

When a pair of Facebook researchers created a dummy account in 2019 to test how users in Kerala, India, experienced the social media site, they found a staggering amount of hate speech, misinformation and calls for violence on the platform. “I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” one researcher wrote. With 340 million users, India is the company’s largest market.

The studies by Facebook employees reveal that the company moves into countries without fully understanding its potential impact on local communities, particularly with regard to culture and politics, and then fails to provide adequate resources to mitigate those effects.

During her testimony Monday, Haugen said that Facebook has a “strategy” of only slowing down harmful content when “the crisis has begun,” deploying its “glass break measures” instead of making the platform “safer as it happens.” She referred to the ongoing ethnic violence in Ethiopia and Myanmar as the “opening chapters of a novel that is going to be horrific to read.”

Facebook AI struggles with non-English languages

Facebook’s algorithm mistakenly banned a hashtag referencing the Al-Aqsa Mosque in Jerusalem’s Old City because it conflated the site with the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party, documents show. The company later apologized, but the erroneous removal is a stark example of how the social media giant’s algorithms can stifle political speech because of language barriers and a lack of resources outside North America.

“Facebook says things like, ‘we support 50 languages,’ when in reality, most of those languages get a tiny fraction of the safety systems that English gets,” Haugen told British lawmakers. “UK English is sufficiently different that I would be unsurprised if the safety systems that they developed primarily for American English were actually [under-enforced] in the UK.”

The company has long relied on artificial-intelligence systems, in combination with human review, as a way of removing dangerous content from its platforms. But languages spoken outside of North America and Europe have made Facebook’s automated content moderation much more difficult.

A document showed that in 2020, the company did not have screening algorithms to find misinformation in Burmese, the language of Myanmar, or hate speech in the Ethiopian languages of Oromo or Amharic.

CEO Mark Zuckerberg said during a quarterly earnings call Monday evening that 90% of flagged content is found by its artificial intelligence system — a significant increase over the last two years. But he noted that the results have varied by category.

“Some of the categories, like hate speech, have been harder,” he said, because “we’re operating in around 150 languages around the world … there’s a lot of cultural nuance in this.”

Facebook labeled election misinformation as “harmful, non-violating” content

Internal documents confirmed by multiple news outlets show that Facebook employees repeatedly raised red flags about misinformation and inflammatory content on the platform during the 2020 presidential election, but company leaders did little to address the issues. Posts alleging election fraud were labeled by the company as “harmful” but “non-violating” content — a problematic category that also includes conspiracy theories and vaccine hesitancy.

The false narratives about election fraud fell into that group of content, which under Facebook policies does not violate any rules. It’s a gray area that allows users to spread claims about a stolen election without crossing any lines that would warrant content moderation.

Although Facebook banned the Stop the Steal group on Nov. 5 for falsely casting doubt on the legitimacy of the election and calling for violence, the group had already amassed more than 360,000 members — and new Facebook groups filled with misinformation began popping up daily. Trump loyalists and right-wing conspiracists seemingly outwitted the social network.

Many employees voiced their concerns on internal message boards as they watched the violent mob attack the U.S. Capitol on Jan. 6, with some writing that they wanted to quit working for the company because leaders had failed to heed their warnings.

While the documents provide insight into Facebook’s awareness of election misinformation, they do not reveal the internal decision-making behind the company’s choice to label election misinformation as non-violating content.


Facebook was aware that maids were being sold on its platform

Internal documents show that Facebook admitted it was “under-enforcing on confirmed abusive activity” when it failed to take action after Filipina maids complained of being abused and sold on the platform, according to the Associated Press.

“In our investigation, domestic workers frequently complained to their recruitment agencies of being locked in their homes, starved, forced to extend their contracts indefinitely, unpaid, and repeatedly sold to other employers without their consent,” one Facebook document read, the AP reports. “In response, agencies commonly told them to be more agreeable.”

Apple threatened to remove Facebook and Instagram from its App Store over the issue, but changed course after the social media giant removed 1,000 accounts linked to the sale of maids. Although Facebook has since cracked down on the practice, human rights activists have noted that images of maids listed with their age and price can still be found on the platform.

Facebook internally debated removing the Like button

In 2019, Facebook examined how people would interact with Instagram content if the platform no longer displayed its Like feature, suggesting that the company was aware the feature could have a negative impact on well-being. According to the documents, the Like button had sometimes caused the platform’s youngest users “stress and anxiety” when their posts didn’t get many likes from friends. But when the button was hidden, users interacted less with posts and ads, and hiding it failed to alleviate social anxiety as researchers had thought it might.

When asked why Facebook hasn’t made Instagram safer for children, Haugen said during her testimony that the company knows “young users are the future of the platform and the earlier they get them the more likely they’ll get them hooked.”

Facebook’s own research, described in the leaked documents, seems to confirm this. An internal report and presentation showed that “time spent” on Facebook by U.S. teenagers was down 16% year-over-year, and that young adults in the U.S. were spending 5% less time on the platform. Young adults were also found to be joining Facebook far later in life than they had in the past, and the overall number of new teen signups was declining.

Zuckerberg briefly addressed Facebook’s effort to target younger demographics during his call with investors Monday, saying the company is focused on providing the “best services for young adults,” which he defined as ages 18 to 29.

“We are retooling our team to make serving the young adults their North Star rather than optimizing for the larger number of older people,” he said. “This shift will take years, not months, to fully execute, and I think it’s the right approach to building our community and company for the long term.”

Write to Nik Popli at nik.popli@time.com