Facebook promoted pages that shared pro-military propaganda in Myanmar, even after it banned accounts linked to the military from the platform due to human rights abuses and the risk of violence, according to a report by the human rights group Global Witness.
Myanmar’s armed forces, known as the Tatmadaw, overthrew the country’s civilian government in February, claiming that elections in November 2020 had been rigged. Later that month, Facebook said it had decided to ban the Tatmadaw from its platform, citing the military’s history of human rights abuses, record of spreading misinformation and the increased risk of violence after the coup.
In April, Facebook introduced new Myanmar-specific rules against praising or supporting the military for arrests or acts of violence against civilians. It also banned praise of protesters who attack the military or security forces. But according to Global Witness, Facebook’s own recommendation algorithms have been inviting users to like pages that share pro-military propaganda that violates the platform’s rules.
The report highlights the extent to which Facebook is still struggling to police its own platform in Myanmar, where in 2018 the social media company admitted it could have done more to prevent incitement of violence in the run-up to a military campaign against the Rohingya Muslim minority the previous year. U.N. investigators said the campaign, which involved mass murder, rape and arson, was carried out with “genocidal intent.” More than 700,000 people fled the violence across the border to Bangladesh, where many remain in refugee camps, according to the United Nations. Myanmar has repeatedly denied that the campaign was genocidal.
Since the period of violence against the Rohingya people, Facebook has hired more than 100 Burmese-speaking content moderators to monitor the platform for hate speech, and has built algorithms to detect hate speech automatically. But observers say hate speech and incitement to violence remain widespread on the platform in the wake of the military coup, in part because those algorithms are still rudimentary, and because the platform is not doing enough to stop repeat offenders from returning after being banned.
“This points to Facebook’s continued failure to effectively enforce their policies,” says Victoire Rio of the Myanmar Tech Accountability Network, who has been engaging with Facebook on harmful content in Myanmar since 2016.
The pages that Global Witness found Facebook was recommending hosted posts including a “wanted” poster bearing the name and two photographs of a woman, offering a $10 million reward for her capture “dead or alive.” The post claimed the woman was among protesters who burned down a factory, Global Witness said. “This girl is the one who committed arson in Hlaing Tharyar. Her account has been deactivated. But she cannot run,” the caption read, according to the report.
The pages also shared a video of a forced confession by a political prisoner, Global Witness said, as well as a video of an airstrike by the Myanmar military against rebel forces, accompanied by laughter and a caption reading: “Now, you are getting what you deserve.” The campaign group said it found several other examples on the pages of content supporting violence against civilians.
“We didn’t have to dig hard to find this content—in fact it was incredibly easy,” Global Witness said in its report. The group said it found the content after typing “Tatmadaw” into the platform’s search box in Burmese, and clicking “like” on the first page that appeared. The rights group then “liked” the first five “related pages” that Facebook suggested. Three of those five pages contained content that violated Facebook’s policies, the report said.
Facebook has removed some of the posts and pages in the Global Witness report, a spokesperson for the company said.
In a statement, Facebook said: “Our teams continue to closely monitor the situation in Myanmar in real-time and take action on any posts, Pages or Groups that break our rules. We proactively detect 99 percent of the hate speech removed from Facebook in Myanmar, and our ban of the Tatmadaw and repeated disruption of Coordinated Inauthentic Behavior has made it harder for people to misuse our services to spread harm. This is a highly adversarial issue and we continue to take action on content that violates our policies to help keep people safe.”
But activists say Facebook’s statistics mask broader failures. “While Facebook proudly claims that it is self-detecting a higher percentage of the content it removes, this does not account for the very large volume of problematic content that continues to spread on the platform undetected,” says Rio, although she notes that Facebook is now removing much more problematic content than it used to.
One weakness in Facebook’s approach to problem accounts so far is that repeat offenders are able to easily return to the platform with new profiles, even after being banned, Rio says.
“It is very likely that the admins behind these pages are known problematic actors, posting problematic content not just on the pages but also on their profiles,” Rio tells TIME. “Facebook has very little capacity to deal with recidivism, so it’s often the same people coming back after getting banned, often with the same name and the same photo. Though Facebook has policies against recidivism and the use of multiple accounts, it is not enforcing them,” she says.