
Meta’s Facebook Algorithms ‘Proactively’ Promoted Violence Against the Rohingya, New Amnesty International Report Asserts


Amnesty International on Wednesday accused Facebook’s parent company Meta of having “substantially contributed” to human rights violations perpetrated against Myanmar’s Rohingya ethnic group.

In a new report, Amnesty claims that Facebook’s algorithms “proactively amplified” anti-Rohingya content. It also alleges that Meta ignored civilians’ and activists’ pleas to curb hate-mongering on the social media platform while profiting from increased engagement.

Facebook’s seeming inability to manage online hate speech and disinformation has become a major offline problem for many countries across the globe. Amnesty is calling for the tech giant to provide reparations to affected communities.


The Rohingya have been persecuted by Myanmar’s Buddhist majority for decades, but Facebook has exacerbated the situation, Amnesty says. The human rights group claims that the Tatmadaw, Myanmar’s armed forces, used Facebook to boost propaganda against the Rohingya and to amass public support for a military campaign of rampant killings, rape and arson targeting the predominantly Muslim minority in August 2017.

In the aftermath, more than 730,000 Rohingya from western Rakhine state were forced to take refuge in camps in neighboring Bangladesh. Today, more than a million Rohingya are living in exile, and Myanmar’s military leaders are facing charges of genocide at the International Court of Justice.

A U.N. fact-finding mission in 2018 determined that Facebook had been a “useful instrument” for vilifying the Rohingya in Myanmar “where, for most users, Facebook is the internet.” Months later, Meta released a commissioned human rights impact report in which it admitted that the company was not doing enough to stop the sowing of hatred against the Rohingya on the platform. Meta has since said it has invested in more Burmese-speaking content moderators and improved technology to address the problem.

Amnesty analyzed internal Meta documents released by whistleblower Frances Haugen in 2021, as well as various public reports, and it conducted interviews with Rohingya activists and former Meta staff. It concludes that Facebook’s parent company, then known as Facebook Inc., was warned years before 2017 that its platform risked contributing to atrocities against the Rohingya, yet it failed to heed those warnings at the time and took “wholly inadequate” measures to address the harm after the fact.


Lead researcher Pat de Brun told TIME the Amnesty report shows the “clear and severe danger” Meta and its engagement-based business model pose to human rights, at-risk communities and conflict-affected areas.

The report cites an unnamed former Meta employee who told Amnesty in April that, based on their experience, they believed the social media company treated the lives of those in the Global South as less worthy of attention. “Different countries are treated differently,” the employee said. “If 1,000 people died in Myanmar tomorrow, it is less important than if 10 people in Britain die.”

“Meta must pay for the harms that they’ve caused, and they have a responsibility under international human rights law and standards to do so,” de Brun added.

In an emailed statement, Rafael Frankel, Meta’s Asia-Pacific director of public policy, told TIME that “Meta stands in solidarity with the international community and supports efforts to hold the Tatmadaw accountable for its crimes against the Rohingya people.”

Frankel did not answer questions about specific measures and practices the company has taken but noted that it has disclosed information voluntarily to investigative bodies.

He stated: “Our safety and integrity work in Myanmar remains guided by feedback from local civil society organizations and international institutions, including the U.N. Fact-Finding Mission on Myanmar; the Human Rights Impact Assessment we commissioned in 2018; as well as our ongoing human rights risk management.”

Meta and anti-Rohingya content

In its report, Amnesty concludes that Meta was made aware as early as 2012 of how its engagement-based algorithms were contributing to serious real-world harm in Myanmar. It alleges that the company has over the last 10 years willfully disregarded known human rights risks on its platform and implemented inadequate solutions, prioritizing profit over users’ safety.

Htaike Htaike Aung, a digital researcher who documents the history of the internet in the Southeast Asian country through the Myanmar Internet Project, tells TIME she met with senior Facebook executives about the social media platform’s effects in 2012 and 2013. “It felt like talking to a void,” she says.

Some of Facebook’s well-intentioned measures have backfired. In 2014, years before the military seized government control, Facebook supported a civil society-led campaign against hate speech by creating virtual “sticker packs” for users to post in response to violent and discriminatory content. But as people did so, Facebook’s algorithm registered the responses as engagement and further increased the visibility and spread of the harmful content, an activist who was involved in the initiative told Amnesty.
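
The dynamic the activist describes is easy to illustrate: a ranking system that counts every interaction as a positive relevance signal will promote a post more widely whether users are endorsing it or condemning it. The sketch below is a minimal, hypothetical illustration of that failure mode; the function names, weights and data are invented for the example and are not Meta’s actual ranking code.

```python
# Hypothetical sketch of engagement-based ranking, invented for
# illustration; it is not Meta's actual algorithm. Every interaction,
# including counter-speech sticker replies, raises a post's score.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int = 0  # likes, sticker responses, etc.
    comments: int = 0
    shares: int = 0

def engagement_score(post: Post) -> float:
    """Naive score: every interaction counts as a positive signal,
    regardless of whether it endorses the post or pushes back."""
    return 1.0 * post.reactions + 2.0 * post.comments + 3.0 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by raw engagement, most-engaged first."""
    return sorted(posts, key=engagement_score, reverse=True)

# A hateful post that draws heavy counter-speech (sticker replies,
# critical comments) outranks a benign post with modest engagement.
hateful = Post("inflammatory rumour", reactions=500, comments=300, shares=40)
benign = Post("local news update", reactions=120, comments=20, shares=5)

print([p.text for p in rank_feed([hateful, benign])])
# ['inflammatory rumour', 'local news update']
```

Under a scoring rule like this, the campaign’s sticker replies feed directly into the same number that decides how widely the original post is shown.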


The Amnesty report says Meta’s content moderation practices have been no match for the sheer volume of algorithmically boosted, inflammatory anti-Rohingya content. Amnesty claims that, as of mid-2014, the company had just one Burmese-speaking content moderator, based in Dublin, Ireland, to monitor the posts of Myanmar’s 1.2 million active users at the time. In interviews conducted by Amnesty, Rohingya refugees recalled that their reports of posts they believed violated Facebook’s community standards were often ignored or rejected. An internal document from July 2019, cited in the Amnesty report, said that action was taken against only “approximately 2% of the hate speech on the platform.”

Rohingya refugees look at a cellphone at the Kutupalong refugee camp in Bangladesh on Jan. 14, 2018. Manish Swarup—AP

In November 2018, Meta announced, among other measures, that it had onboarded 99 Myanmar-language content moderators. (The current number of such reviewers, tasked with monitoring the posts of the Southeast Asian country’s estimated 20 million Facebook users, is unknown.) Anti-Rohingya sentiment has nevertheless flourished on Facebook, the rights group says.

Based on its review of internal documents, Amnesty determined that in 2020, a video of an anti-Rohingya Buddhist monk amassed 70 percent of its views on Facebook through “chaining”—the automatic playing of a recommended video after one ends—even though Meta had banned the monk’s Facebook profile for hate speech in 2018.
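
The “chaining” dynamic can be sketched the same way: an autoplay picker that chooses the next clip purely by a predicted engagement score, and never consults any moderation signal about the account that uploaded it, will keep surfacing videos tied to a profile that has already been banned. The example below is hypothetical; the names, scores and banned-account check are invented for illustration and do not describe Meta’s actual recommendation system.

```python
# Hypothetical sketch of autoplay "chaining": the next video is picked
# purely by predicted engagement, with no check on whether the uploader
# has been banned. Invented for illustration only.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    uploader: str
    predicted_engagement: float  # stand-in for a learned relevance score

BANNED_ACCOUNTS = {"banned_monk_profile"}

def next_video(candidates: list[Video]) -> Video:
    """Autoplay picker that maximizes predicted engagement only."""
    return max(candidates, key=lambda v: v.predicted_engagement)

def safer_next_video(candidates: list[Video]) -> Video:
    """Variant that at least drops uploads from banned accounts."""
    allowed = [v for v in candidates if v.uploader not in BANNED_ACCOUNTS]
    return max(allowed, key=lambda v: v.predicted_engagement)

candidates = [
    Video("sermon clip", "banned_monk_profile", predicted_engagement=0.92),
    Video("cooking show", "food_channel", predicted_engagement=0.55),
]

print(next_video(candidates).title)        # "sermon clip" keeps chaining
print(safer_next_video(candidates).title)  # "cooking show"
```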


Besides the algorithms, Amnesty said other Facebook features incentivized publishers to post anti-Rohingya content. The Instant Articles feature, for example, which was rolled out in 2015 and allows the posting of news-format stories directly on Facebook, caused clickbait and sensational content to flourish. Meta also directly profited from paid advertising by the Tatmadaw, Amnesty added.

To date, Meta has championed the use of artificial intelligence to improve detection of harmful content. But this, too, is falling short. In March, a report from Global Witness found that Facebook approved advertisements containing hate speech targeting the Rohingya.

For its part, the company is taking more steps to address human rights issues stemming from its platform’s use in Myanmar. In February of last year, amid the military takeover of Myanmar, Meta banned the Tatmadaw and other state-sponsored entities from Facebook and Instagram. And in its July 2022 Human Rights Report, the company outlined other Myanmar-specific measures it’s taken, such as a ‘Lock your profile’ feature to provide users who may be targeted for harassment or violence with greater privacy.

Victoire Rio, a digital rights researcher based in Myanmar whose studies were cited in the Amnesty report, agrees there are flaws in Facebook’s recommendation algorithms. She also expresses concern that Meta does little due diligence on users of its advertising and news services. “The problem here is not just one of Facebook not investing enough in content moderation, but the platform itself not being neutral,” Rio says.

What Amnesty says Meta owes the Rohingya

Amnesty claims its findings justify increased regulation of the tech sector as well as the Rohingya’s demands for reparation. The organization called on Meta to support Rohingya victims’ legal, medical and psychological care, and compensate them based on the opportunities they lost.

Rohingya groups in Cox’s Bazar have directly asked Meta to fund a $1 million education project for children and adults in refugee camps. “I really believe that we deserve a remedy from Facebook,” Sawyeddollah, a 21-year-old Rohingya activist living in a refugee camp in Bangladesh, told Amnesty. “Facebook cannot remake our lives as before; only we can do that. But what we need is education to do it.”


In its report, Amnesty argues that the $1 million remediation request is a drop in the bucket compared with what Meta makes: the tech firm’s revenue in 2021 hit nearly $118 billion, with a post-tax profit of $39.3 billion, meaning the requested fund would amount to roughly 0.003% of that profit. It would also be just a fraction of what the Rohingya require, Amnesty points out, citing a U.N. figure estimating their total educational need at more than $70 million.

Facebook’s Director of Human Rights Miranda Sissons reportedly rejected the education fund proposal last year, saying the company “doesn’t directly engage in philanthropic activities,” though she reaffirmed Meta’s commitment to engaging with the Rohingya community, including refugees in Cox’s Bazar and other camps.

The Amnesty report concludes: “Meta’s refusal to compensate Rohingya victims to date—even where the community’s modest requests represent crumbs from the table of the company’s enormous profits—simply add to the perception that this is a company wholly detached from the reality of its human rights impacts.”


Amnesty has also called for increased independent monitoring of the tech sector. In just the last few years, lawmakers and advocates around the world have been trying to rein in social media companies, though it’s a challenging and sometimes controversial endeavor.

“These companies have been incredibly effective at selling a narrative that says: if you regulate us, if you address the most harmful aspects of our business, you will fundamentally make the internet inaccessible for all the reasons that people depend on it,” Amnesty’s de Brun says.

But “these technologies fundamentally shape how human society works nowadays and how we interact with each other,” he adds. “There’s no reason that this business model needs to dominate.”
