There’s an old joke among Twitter employees that being on the platform is like playing a huge online multiplayer game where every day there’s a different main character—meaning a person who’s critiqued, harassed, or otherwise shoved into the spotlight. According to the joke, you have just one goal in the game of Twitter: never become that main character yourself.
One day in 2018, Twitter’s main character was Vernon Unsworth, a British diver who’d spent days assisting the rescue of a group of Thai boys trapped in a flooded cave. After billionaire Elon Musk offered a minuscule submarine to the rescue divers, Unsworth told the media that Musk’s idea was just a useless PR stunt. Musk then took to Twitter, where (in tweets that he later deleted) he baselessly accused the man of being a “pedo guy,” or pedophile. The tweets prompted hundreds of Musk fans to pile on to the diver with abusive, humiliating attacks. Musk subsequently apologized for the tweets in court, saying he did not mean for them to be taken literally.
The saga was an example of dogpiling: a phenomenon in which powerful Twitter users spur legions of their fans to harass someone else. For years, teams of Twitter employees have been working—albeit with limited success—to reduce dogpiling and other common forms of abuse.
On April 25, those Twitter employees learned that Musk, architect of the “pedo guy” saga, could become their new boss—after the board accepted a $44 billion bid from the world’s richest man.
In a statement announcing that Twitter had agreed to let him purchase the social network, the CEO of Tesla and SpaceX spoke in grandiose terms familiar to anyone who follows his pronouncements on colonizing Mars or building electric vehicles: “Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated.”
But many on the frontlines of the fight for democratic spaces online have questioned whether Musk’s ownership of Twitter would undermine, rather than bolster, democracy. To employees who had witnessed Musk’s own behavior on the platform, the billionaire’s words about freedom of speech rang hollow. With more than 85 million followers, Musk has used his influential account to not only direct insults at critics and share memes about going to the bathroom, but also, according to regulators, to make “false and misleading public statements” that boosted Tesla’s stock price and harmed investors.
Some Twitter employees believe this record bodes especially poorly for the company’s anti-harassment efforts. “Multiple times, his followers have been the perpetrators of targeted harassment, and the use of his profile has encouraged dogpiling—which are the exact behaviors we’re trying to limit,” said an employee on Twitter’s platform health team, which works on making the site a safer online space for users. “Since Trump was banned, Musk has become Twitter’s power user number one,” the person said. The employee, who was not authorized to speak publicly, added that they feared a Musk acquisition would at the very least reduce user trust in Twitter’s anti-abuse efforts, and at worst could result in the work being deprioritized or canceled.
Members of marginalized communities—who are disproportionately the victims of online threats and abuse—are among those most protected by Twitter’s current content moderation system. Activists from these communities share Twitter employees’ concerns that those protections could be rolled back. “If Elon Musk were to take over, the damage that would be done would spread from Twitter workers not being able to implement the things they need in order to keep the platform safe,” Jelani Drew-Davi, a campaign manager at the digital civil rights group Kairos, told TIME in the days leading up to the deal. As an example of Musk’s record on similar matters, Drew-Davi cited a lawsuit alleging a culture of rampant racist abuse toward Black workers in a Tesla factory in California.
Since the explosion of social media usage more than a decade ago, researchers and technologists have forged an understanding of the ways that the design of social media sites has an impact on civic discourse and, ultimately, democratic processes. One of their key findings: sites that privilege free speech above all else tend to become spaces where civic discourse is drowned out by harassment, restricting participation to a privileged few.
That finding has informed Twitter’s recent work. While the company does remove tweets and ban accounts of severe offenders, much of its current approach focuses on nudging users to be kinder. Before Musk’s bid, one of the platform’s stated priorities was facilitating “safe, inclusive, and authentic conversations.” It has also pledged to “minimize the distribution and reach of harmful or misleading information, especially when its intent is to disrupt a civic process or cause offline harm.” In cases where tweets are found to be bad for civic discourse but not illegal—like misinformation or insults—they can be excluded from recommendation algorithms rather than deleted from the platform entirely, meaning that Twitter doesn’t boost them into the feeds of users who do not follow their author directly. It is unclear whether these policies will continue under the ownership of Musk, who has railed against what he calls “shadow bans.”
“In a way, [Musk’s] goals are aligned with ours in that we are certainly interested in protecting democracy,” says the Twitter employee on the health team. “But the idea of bringing more free speech to the platform exposes his naiveté with respect to the nuts and bolts. A lot of platforms [have been] founded on this free speech principle, but the reality is that either they become a cesspool that people don’t want to use, or they realize that there is actually the need for some level of moderation.”
Business analysts point out that content moderation is good for profits, too. “Without vigorous content moderation, the platform Musk seeks to own would be swamped by spam, porn, anti-vaccination misinformation, QAnon conspiracies, and fraudulent campaigns to undermine the midterms and 2024 presidential election,” said Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, in a statement. “That’s not a business that most social media users or advertisers would want to associate with.”
Musk’s takeover deal wasn’t a straightforward tale. It took several twists and turns, as funding looked doubtful and Twitter’s board of directors seemed resistant, adopting a strategy known as a “poison pill” to ward off a takeover. Throughout, Musk cast his quest as flying in the face of intransigent Silicon Valley elites. His statements on free speech often align with Republican talking points that conservatives are being unfairly censored by tech companies, and—in a move that could open the door for former President Donald Trump’s return to the platform—Musk has said that he would prefer “time-outs” for users who break the site’s rules, rather than permanent bans. (Twitter banned Trump permanently after Jan. 6, 2021, for incitement to violence during his attempt to undemocratically overturn the results of the 2020 election.)
The debate over transparency on Twitter
Alongside vague commitments including adding an edit button and getting rid of spam on the platform, Musk’s most substantial call has been for Twitter to be more transparent about its decision-making. He wants it to “open source” its algorithm, so users can find out when Twitter has stopped recommending their tweets to other users. “That action should be made apparent,” he said at an April 14 TED conference, “so there’s no behind the scenes manipulation, either algorithmically or manually.”
But employees who work in the trenches of content moderation say that, while total transparency is a noble goal, informing users about which specific tweets are being “down-ranked” would in practice give bad actors helpful information about how to evade limits on spam, misinformation and hate speech. Indeed, Twitter is already among the most transparent of all social networks in terms of sharing how its algorithm works, as well as researching its own flaws and sharing the results publicly.
That research suggests that in practice, more conservative views may have benefited most from the design of Twitter’s algorithm. Last October, Twitter released research showing that its algorithm was not politically neutral: in the run-up to the 2020 election in the U.S., rightwing partisan news sources received a greater boost from Twitter’s algorithm than moderate or left-leaning news sources. The research also found a similar effect for politicians in six out of the seven countries studied, including the U.S. It showed that Twitter’s algorithm recommended, to more users, posts by politicians from mainstream rightwing parties than those from centrist or leftwing parties.
Six months on, that team is continuing its work looking at algorithmic bias, amid suggestions from some conservatives that such work means meddling with freedom of speech. Early indications suggest, according to Twitter, that the platform’s boosting of center-right politicians isn’t an intrinsic quality of its algorithm. Instead, researchers have found that amplification shifts over time in line with the topics people care about and changes in how users behave. The data is helping the researchers begin to arrive at an understanding of Twitter as a “sociotechnical system,” with definitions about what constitutes normal and abnormal levels of algorithmic amplification of political content. Such an understanding may one day allow the company to intervene when dangerous real-world events are unfolding. But doing so would be a political intervention necessarily based on Twitter’s values as a company. Overnight, those values appear to have changed from “facilitating healthy conversation” to Musk’s self-professed free speech “absolutism.”
On Twitter, where discourse is limited to 280 characters per tweet, nuanced discussion of complex research and value judgments isn’t easy—and in the febrile climate, even Twitter’s own employees run the risk of becoming Twitter’s dreaded main character. Rumman Chowdhury, the leader of the team that carried out the algorithmic amplification research, suggested in a series of tweets that she was opposed to Musk buying the company, though she did not say that this was out of a fear that his takeover would spell an end to her work. Instead, her comments appeared to reference his capacity to weaponize Twitter mobs against critics. “Musk’s immediate chilling effect was something that bothered me significantly,” she wrote. “Twitter has a beautiful culture of hilarious constructive criticism, and I saw that go silent because of his minions attacking employees.” Soon enough, she muted her notifications on the thread, adding: “the trolls have descended.”
—With reporting by Julia Zorthian