
‘A Toxic Place for Women.’ A New Study Reveals the Scale of Abuse on Twitter


Amnesty International has released a study detailing the scale of abuse directed at women on Twitter, calling the social media platform “a toxic place” for women and drawing on what it claims is the “world’s biggest dataset of online abuse targeting women.”

In partnership with Element AI, an artificial intelligence software company, a so-called “Troll Patrol” of 6,500 volunteers from more than 150 countries helped analyze millions of tweets for Amnesty’s large-scale survey, released Tuesday. The tweets surveyed were those received throughout 2017 by 778 journalists and politicians from the U.S. and the U.K., and the analysis examined how the abuse varied with the women’s racial and ethnic backgrounds as well as their political affiliations.

“We are experiencing a watershed moment with women around the world using their collective power to speak out about the abuse they face and amplify their voices through social media platforms,” said Amnesty’s report. Yet the study revealed just how much abuse women face on these very platforms: 1.1 million abusive or problematic tweets were sent to the 778 women in the dataset over the course of the year, equivalent to one every 30 seconds on average. Abusive tweets were categorized as those that violated the platform’s own rules, such as content containing physical or sexual threats, while problematic tweets were defined by Amnesty as those containing hostile content, including tweets reinforcing negative or harmful stereotypes against a group of individuals.

The findings also show that women of color were more likely than white women to be mentioned in abusive or problematic tweets, and that black women were disproportionately targeted, being 84% more likely than white women to be mentioned in such tweets. “Twitter’s failure to crack down on this problem means it is contributing to the silencing of already marginalized voices,” said Milena Marin, Senior Advisor for Tactical Research at Amnesty International, in a statement.


In February 2017, a video of 27-year-old activist Seyi Akiwowo speaking at the European Parliament went viral on social media. Akiwowo promptly received a wave of online abuse and reported examples of abusive posts to platforms including Twitter, which failed to respond to her reports at the time. “The experience was like no other,” says Akiwowo, who went on to found Glitch, a U.K.-based non-profit organization working to end online abuse.

“I was called the n— word, and variations of that word, and Twitter’s algorithms just couldn’t keep up with the abuse I was getting,” she tells TIME, adding that she hopes Amnesty’s report will draw greater attention to the intersectional nature of abuse and increase pressure for funding and policy action to tackle the issue. “It’s important for us to look at the diversity of experiences that black women face when we talk about online abuse, such as misogynoir and dead-naming, and that there’s greater education teaching people how to be active bystanders and how to be a real ally to black women online.”

Amnesty’s study also showed that abuse against women did not discriminate by political leaning: the female journalists and politicians featured spanned the ideological spectrum and represented a variety of political views, and all were targeted by abuse. The tweets surveyed did not include deleted tweets or tweets from accounts that were suspended or disabled during 2017, meaning the true scale of the abuse is likely greater than the study’s results indicate.

Amnesty said that despite repeated requests, Twitter has failed to provide specific data regarding the scale of abuse on the platform or to outline what it is doing to tackle the issue. In an email response to TIME, Vijaya Gadde, Legal, Policy and Trust & Safety Global Lead at Twitter, said that the company is “committed to improving the collective health, openness, and civility of public conversation” on the platform, adding, “We work hard to build globally enforceable rules and have begun consulting the public as part of the process – a new approach within the industry.”

In the platform’s biannual Transparency Report released last week, Twitter said that during the first six months of 2018, more than 2.8 million unique accounts were reported for abuse, and action was taken against nearly 250,000 of them. “Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithms they rely on,” said Marin.

