British Prime Minister Boris Johnson was set to urge social media companies to take action to stamp out racial abuse on their platforms on Tuesday, after the profiles of Black England soccer players were bombarded with racist comments.
Johnson was set to reiterate “the urgent need for action” by social media companies against “utterly disgraceful” racist abuse of soccer players online, his spokesperson told journalists Tuesday.
Marcus Rashford, Jadon Sancho and Bukayo Saka, three Black players on England’s soccer team, were racially abused on Twitter and Instagram following the team’s defeat to Italy in the Euro 2020 final on Sunday. Each of the three had failed to score in the penalty shootout that decided the game.
The players were targeted with racist abuse including monkey and banana emojis and racial slurs after the game, both by anonymous accounts and those bearing what appeared to be users’ real names.
Twitter and Facebook, which owns Instagram, said they acted quickly to take down the torrents of abuse on the players’ profiles. But many users complained that racist comments were visible hours after they were posted, and in some cases were not removed even after being reported.
The Football Association, English soccer’s governing body, said in a statement it was “appalled by the online racism that has been aimed at some of our England players on social media.”
Major English footballing bodies have previously called out Twitter and Facebook for what they say is a systemic failure to prevent their platforms being used to spread racism and harassment.
In April, the Premier League boycotted Facebook, Instagram and Twitter for three days to “demand change” from social media platforms. “Social media companies must do more to eradicate online hate,” the Premier League said in a statement at the time.
Here’s a look at why, even after public warnings, the social media platforms are still struggling to stamp out racist abuse faster than it appears—and what it might take to fix the problem.
Why is racist abuse still possible on social media?
At the same time as the Premier League’s boycott of social media in April, the league and several other English footballing bodies sent a letter to the CEOs of Facebook and Twitter, calling on the platforms to adopt specific policies. Their suggestions included filtering and blocking posts with racist language before they are sent, and operating “robust, transparent and swift” measures to remove content that slips through.
Facebook and Twitter say that racist abuse is not allowed on their platforms, but the fact that much of it still slips through reveals something about how their rules are enforced.
The platforms are reluctant to ban specific words, phrases or emojis outright, because users can simply adapt their language to get around those curbs. Furthermore, sometimes the victims of racism may choose to reclaim slurs, or publicize them to raise awareness of abuse. So much of the meaning in language comes from context, and outright censorship is too blunt a tool to take that into account.
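The limits of outright word-banning are easy to demonstrate. The sketch below is purely illustrative, not any platform's actual system: it uses a made-up placeholder term ("badword") in a hypothetical blocklist, and shows how an exact-match filter both misses trivially obfuscated spellings and wrongly flags a victim describing the abuse they received.

```python
import re

# Hypothetical blocklist for illustration only; real platforms
# do not publish their moderation rules or term lists.
BLOCKLIST = {"badword"}

def naive_filter(comment: str) -> bool:
    """Return True if the comment should be blocked.

    Matches whole lowercase words exactly -- the bluntest possible rule.
    """
    words = re.findall(r"[a-z]+", comment.lower())
    return any(word in BLOCKLIST for word in words)

# The exact term is caught...
print(naive_filter("you badword"))            # True
# ...but trivial evasions slip straight through:
print(naive_filter("you b4dword"))            # False
print(naive_filter("you b.a.d.w.o.r.d"))      # False
# ...while a victim reporting or discussing the abuse is blocked anyway:
print(naive_filter("he called me a badword")) # True
```

Adapted spellings defeat the rule while context-dependent but legitimate uses are censored, which is why platforms lean on machine-learning classifiers instead, with the failure modes described below.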
At the same time, social media platforms with billions of users are too large to rely entirely on human moderators to examine the specific context of every post. As a result, large platforms like Facebook and Twitter have built algorithms that attempt to interpret the context of people’s comments and remove hate speech automatically.
The problem is, those algorithms don’t always work very well. After the England players missed their penalties, several users shared screenshots of Instagram’s moderation system responding to reports of posts containing racial slurs and emojis with the message: “our technology has found that this comment probably doesn’t go against our Community Guidelines.”
Many online commenters criticized the platforms for not simply assigning more human moderators to proactively monitor comments on the three players’ profiles, given the heightened risk of racist abuse after such a high-profile sporting event.
Some Facebook employees agreed, according to BuzzFeed reporter Ryan Mac, who cited posts on the company’s internal forum, Workplace, in a series of tweets. “It seems this was totally preventable,” one employee reportedly wrote, citing the prominence of racist abuse online throughout the last football season. “We get this stream of utter bile every match, and it’s even worse when someone black misses,” another employee wrote, according to Mac. “We really can’t be seen as complicit in this.”
Should platforms stop allowing anonymous users?
The Premier League has also called on the social media platforms to subject all users to an “improved verification process” that would allow law enforcement to identify the people behind any accounts that are involved in racist abuse.
“Players, match officials, managers and coaches of any origin and background and at any level of football should be able to participate in the game without having to endure illegal abuse,” the Premier League said in its April letter to the social media companies. “We, the leaders of the game in English football, will do everything we can to protect them, but we cannot succeed until you change the ability of offenders to remain anonymous.”
This is something the platforms already do in some cases. For example, people seeking to post political advertisements on Facebook must confirm their identity with official documents and confirm their addresses by receiving a verification code through the mail.
Other observers have warned that forcing users to verify their identity wouldn’t stop all racist abuse, because much of it occurs out in the open. Savills, a U.K. real estate agency, suspended an employee after tweets targeting Black England footballers with racial slurs were posted from an account bearing his real name. The employee reportedly said his account had been hacked.
Some experts say that taking away the option to remain anonymous online could harm vulnerable users. “Ending anonymity isn’t the easy solution you think it is,” said Alex Krasodomski, the director of the Centre for the Analysis of Social Media at Demos, a London-based think tank, on Twitter. “Vulnerable groups need anonymity more than racists do. Difficult subjects are often only accessible to some through anonymous communications: mental health, experience of end of life, sexual and gender identity and so on. We have to protect this.”
What do the platforms say?
Without commenting on the Premier League’s specific policy suggestions, a Twitter spokesperson said in a statement that the company had removed more than 1,000 tweets related to the abuse of England players, and permanently suspended “a number” of accounts. “The abhorrent racist abuse directed at England players has absolutely no place on Twitter,” the spokesperson said.
“We will continue to take action when we identify any Tweets or accounts that violate our policies. We have proactively engaged and continue to collaborate with our partners across the football community to identify ways to tackle this issue collectively and will continue to play our part in curbing this unacceptable behaviour—both online and offline.”
A Facebook spokesperson said in a statement: “No one should have to experience racist abuse anywhere, and we don’t want it on Instagram. We quickly removed comments and accounts directing abuse at England’s footballers on Sunday and we’ll continue to take action against those that break our rules.”
“In addition to our work to remove this content, we encourage all players to turn on Hidden Words, a tool which means no one has to see abuse in their comments or DMs,” the Facebook spokesperson said. “No one thing will fix this challenge overnight, but we’re committed to keeping our community safe from abuse.”
This story was updated on July 14, 2021, to include a statement from a Facebook spokesperson.