In 2018, Rachel Cullen began receiving new followers on X, the platform formerly known as Twitter. A writer based in northern England, she often used the platform to promote her work, so the uptick didn’t immediately raise alarm bells. “My Twitter presence is significant enough that I initially didn’t quite catch on to what was happening,” she said.
As the new followers popped up, she says, they appeared to be recently created accounts, some of which did not seem genuine. They would send her offensive direct messages or question her character. As she blocked and reported the accounts, more would appear. “It felt a bit like Whack-a-Mole,” she said of the experience. Though she suspected the accounts were being created by an ex-partner, the platform’s lack of identity verification made it impossible to know for sure, and she felt that her only recourse lay within the app itself. “It’s very difficult to prove anywhere, but at the very least I had the tools to block these accounts.”
Cullen is one of many users who have faced online harassment on X. She joined many others in speaking out against owner Elon Musk’s announcement that X would soon remove the app’s “block” feature, which has long allowed users to restrict specific accounts from seeing and interacting with their content. In a post tagging Musk and X CEO Linda Yaccarino, Monica Lewinsky wrote, “please rethink removing the block feature. as an anti-bullying activist (and target of harassment) i can assure you it’s a critical tool to keep people safe online.”
In a quote tweet, Yaccarino responded, “Our users’ safety on X is our number one priority. And we’re building something better than the current state of block and mute. Please keep the feedback coming.” The company has not shared a timeline for the feature’s removal. X did not respond to TIME’s request for comment.
Read More: The 7 Biggest Challenges Twitter’s New CEO Faces
Musk sparred with James Woods on the platform after the conservative actor questioned his plan to remove the feature. “Musk, whom I once championed, is only doing this to protect his advertisers,” Woods wrote. “Users of X are mere pawns to turn the site into an electronic shopping mall. The man I thought was a defender of free speech is just another greedy capitalist. Disappointing, but not surprising.” In response, Musk advised Woods to delete his account. Somewhat ironically, Woods then posted a screenshot that appeared to show Musk had blocked him.
Harassment and abuse
For targets of harassment, the block feature is one of the last remaining shields against abuse and hate speech on a platform that appears to have relaxed its stance on content moderation.
X had a problem reining in harmful posts and messages long before Musk took the helm. A 2021 Pew Research Center survey found that 17% of adult Twitter users had personally experienced “harassing or abusive behavior” on the site.
“Twitter, like most social media platforms, does not have a great track record on protecting and supporting users in the face of hate and harassment online,” says Viktorya Vilk, director for digital safety and free expression at PEN America. “[The company] did not have an in-platform reporting feature for harassment for the first seven years that it was created.”
Marginalized groups are far more likely to experience harassment online. In 2017, an Amnesty International study analyzed millions of tweets received by nearly 800 journalists and politicians from the U.K. and U.S. and found that 7.1% of tweets sent to the women in the study were “problematic” or “abusive.” Black women were 84% more likely than white women to be mentioned in abusive or problematic tweets.
“People are disproportionately targeted for their identity and their profession,” notes Vilk. “If you are a woman, LGBTQ, a person of color, or if you identify as a religious or ethnic minority of any kind, you are almost certainly going to experience more hate and harassment. If you are also someone who requires social media to do your work—if you’re a reporter, or a writer, or an artist, or a human rights defender—you’re going to get more hate and harassment.”
For some time, Vilk says, it appeared as though the platform was beginning to take harassment claims more seriously–introducing new features that allowed users to block, mute, and report harmful accounts. “In the couple of years right before Musk acquired the platform, we and other organizations noticed pretty significant progress,” she says.
Read More: A Brief History of Elon Musk Saying One Thing and Doing Another at Twitter
But following Musk’s purchase of the platform last fall, hate speech on X skyrocketed—researchers at the Network Contagion Research Institute reported a rise in engagement with anti-Semitic tweets and a 500% increase in use of the n-word in the days after Musk took over. Musk also reinstated previously banned accounts, including those of Andrew Tate and Andrew Anglin, and dissolved the company’s Trust and Safety Council, first formed in 2016 to help shape the company’s approach to free speech and harassment on the platform.
‘A constant phantom’
Once the block feature is removed, users may still be able to use the mute function, which hides another account’s content from their feed, but Vilk argues that the two are fundamentally different tools. “All muting does is protect you from having to see something–it doesn’t actually make it stop. If you are someone who is getting death threats, you won’t know you’ve got a death threat at all. Blocking, on the other hand, actually cuts off contact between someone who is under attack and the person who’s attacking them.”
Read More: Why Your Twitter Feed Is Suddenly Full of People You Don’t Follow
The latest move by Musk is not the only way users are losing the ability to limit their exposure to certain accounts. In 2021, after facing harassment on what was then Twitter, software engineer Tracy Chou created Block Party to help users combat hateful content on the platform. The tool helped users filter out and block accounts that were likely to be trolls–such as accounts without profile photos or with fewer than 100 followers. But after X began charging for access to its API, the interface that lets third-party services access X’s data, the app was forced to shut down.
Read More: She Built an App to Block Harassment on Twitter. Elon Musk Killed It
In the months since, Chou says, she has noticed many users pulling back from the platform—posting less or leaving X entirely. “When people don’t have the control they need to be able to use the platform in a way that feels good and safe to them, they will stop engaging,” she tells TIME.
In the absence of Block Party, the onus is largely on users to individually block or report harmful accounts–and Musk’s proposal will likely leave those who want to stay on the platform with few options for avoiding harmful content and harassment.
“The new class of problems is potentially that you have no invisible defense mechanisms anymore,” Chou says. “Anything you do can be very visible and escalate situations with people that you don’t want to escalate with.”
Those invisible defenses are essential, says Cullen, who notes that, though she never feared for her own safety, blocking the fake accounts over the years helped keep the harassment largely out of sight. “I’m worried about women for whom there is a more dire or more direct threat,” she says. “It’s almost like a constant phantom.”
Write to Simmone Shah at simmone.shah@time.com