Facebook’s parent company Meta has been accused of “whitewashing” a long-awaited report on its human rights impact in India, which the company released in a highly summarized form on Thursday, drawing fire from civil society groups.
TIME first reported in August 2020 that Facebook had commissioned the human rights impact assessment (HRIA), in an effort to determine its role in the spread of hate speech online. The report has been anticipated for nearly two years by rights groups who have long raised the alarm that Facebook is contributing to an erosion of civil liberties in India and to dangers faced by minorities.
Ankhi Das, Facebook’s most senior executive in India, resigned in October 2020 after the Wall Street Journal reported she had intervened to prevent the platform from removing accounts belonging to members of the country’s Hindu nationalist ruling party, some of whom had called for violence against India’s Muslim minority. India is Facebook’s largest market by users.
The India HRIA was carried out by an independent law firm, Foley Hoag, which interviewed more than 40 civil society stakeholders, activists, and journalists to complete the report. But Facebook drew criticism from rights groups on Thursday after it released its own four-page summary of the law firm’s findings, one almost entirely bereft of meaningful detail.
Ritumbra Manuvie, an academic who was one of the civil society members interviewed by Foley Hoag for the report, said Facebook’s summary was a “cover up of its acute fault-lines in India,” and showed that its “commitment to human rights is rather limited.”
The Real Facebook Oversight Board, a pressure organization made up of critics of the platform, said in a statement that the report was “a master-class in spin and obfuscation” and a “whitewashing [of] the religious violence fomented in India across [Meta’s] platforms.”
Facebook’s summary of the report, the full version of which was not made public, says that Foley Hoag made “recommendations” to the company on how to improve its human rights impact in India. But Facebook’s summary did not disclose what those recommendations were.
The four-page summary says: “The HRIA developed recommendations covering implementation and oversight; content moderation; and product interventions; and other areas.” It then details in the following seven paragraphs the human rights measures that Facebook is already taking in India, including increasing its content moderation workforce and bolstering transparency.
Facebook adds that the full report does not make any judgment on the most contentious allegation stemming from the Das controversy in 2020: that its moderation of hateful content in India is biased toward the ruling party so as to maintain market access. “The assessors [Foley Hoag] noted that civil society stakeholders raised several allegations of bias in content moderation,” Facebook’s summary of the report says. “The assessors did not assess or reach conclusions about whether such bias existed.”
“We balance the need to publish these reports while considering legitimate security concerns,” a Meta spokesperson said in a statement. “While we don’t agree with every finding, we do believe these reports guide Meta to identify and address the most salient platform-related issues.”
Foley Hoag did not respond to a request for comment.
“Facebook may as well have published a few blank pages on their human rights impact assessment (HRIA) on India,” Alaphia Zoyab, the director of campaigns and media for the progressive tech lobby group Luminate, said in a tweet. “I’ve never read so much bull—t in four short pages.”
“This is an insult to Indian civil society,” Zoyab added.
Manuvie, who is a legal scholar at the University of Groningen in the Netherlands, said that the foundation she runs, The London Story, had reported more than 600 India-based pages that it says are hate accounts to Facebook—but that the platform had removed only 16 of them.
“As stakeholders, we told [Foley Hoag] very clearly that Facebook has provided momentum for fringe groups to organize, hunt and doxx inter-faith marriage couples,” Manuvie told TIME. Facebook’s summary of the report contains no mention of this specific form of platform abuse.
In 2021, TIME reported that Facebook allowed a Hindu nationalist conspiracy theory to flourish on its platform, despite employees at Facebook warning of the risks. One video of an extremist preacher calling for Hindus to rise up and kill Muslims racked up 1.4 million views but was not deleted until TIME contacted Facebook about it last November.
Earlier that year, TIME reported that Facebook banned a Hindu extremist group under its terrorism policies—but left most of its pages online for months after that ban, allowing them to share content depicting Muslims as green monsters with long fingernails to their more than 2.7 million total followers.