The end of April will be a turning point for former President Donald Trump. That’s when he will learn whether he can regain control of his Facebook and Instagram accounts—and direct access to nearly 60 million followers on two of the world’s largest social media platforms.
It’s not elected officials who will make this call, nor a judge, nor even Facebook itself. Instead, it will be the 20 members of the Facebook Oversight Board, a little-known panel of lawyers, journalists and former political leaders from 18 countries that Facebook established less than one year ago.
Trump’s accounts have been suspended since Jan. 7—the day after he incited a violent mob of his supporters to storm the U.S. Capitol building in Washington. While Twitter permanently banned Trump, Facebook CEO Mark Zuckerberg said the suspensions on Facebook and Instagram would last indefinitely, and at least until the end of Trump’s presidency. The news landed in polarized Americans’ feeds with predictable divisiveness: many said the decision had come too late; others—and not just supporters of the former President—derided it as unconscionable censorship. Then, the day after Trump left the White House, Facebook announced it was asking its newly created Oversight Board to decide whether to reinstate the 45th U.S. President.
The Oversight Board’s ruling on the Trump case will be a defining moment for Facebook, perhaps the biggest test yet of whether the company’s attempts to regulate itself can gain legitimacy in the eyes of ordinary people and lawmakers around the world. The decision will be watched particularly closely in the U.S. and E.U., where legislative efforts to rein in Big Tech are under way. “The ambition for the Oversight Board is for it to have a quasi-judicial role, and the key thing about any judicial institution is it has to have legitimacy to earn deference,” says Daniel Weitzner, the director of MIT’s Internet Policy Research Initiative. “Over time, if this body makes decisions that are seen as reasonable, and Facebook follows them, I think they’ll become a part of the landscape.”
The question of Trump’s continued access to Facebook is especially thorny for the company, given the polarized political landscape. “We’re dealing with a significant proportion of registered Republicans who question the legitimacy of the Biden Administration,” says Weitzner, who served in the Obama White House. “They’re not going to accept the Facebook Oversight Board’s legitimacy if they don’t like the result.”
And in coming to its ruling on whether to uphold Trump’s suspension, the Oversight Board could also open the door to a question even bigger than Trump’s future on the platform: whether Facebook should change its rules to allow other elected politicians to be banned. Until now, that has been rare, thanks to an exemption that allows political leaders to break the rules if Facebook judges that the newsworthiness of a statement outweighs the risk of physical harm.
In referring Trump’s case to the Oversight Board, Facebook also asked for “policy recommendations” about how the company should deal with “suspensions when the user is a political leader.” But an Oversight Board spokesperson told TIME that any decision would not be binding—leaving the final say up to Facebook alone.
Why Facebook created the Oversight Board
For years, Facebook has said it’s uncomfortable that it alone has the power to grant or deny access to one of the world’s information superhighways—and the attention that those kinds of decisions inevitably bring.
“I think everyone would benefit from greater clarity on how local governments expect content moderation to work in their countries,” Zuckerberg wrote in 2018. But in lots of places, government rules still aren’t tailored to legally enforce the removal of online threats, especially in countries like the U.S. where free speech is prized. “Those norms don’t exist, and in the meantime we can’t duck making decisions in real time,” Facebook’s vice president for global affairs, Nick Clegg, told the New York Times on Monday. (Facebook said Clegg was unavailable for an interview for this story.)
“Platforms have never wanted to be in a position of having to make controversial decisions,” says Weitzner, who in the 1990s was involved in drafting Section 230, the federal law that defines how platforms are held accountable for content. “They actually want to be told what to do.”
In 2018, Zuckerberg floated plans for a kind of Supreme Court to oversee Facebook’s rules, staffed by independent experts. In May 2020, that body came to life in the form of the Facebook Oversight Board. A co-chair, Helle Thorning-Schmidt, is a former Prime Minister of Denmark. Among the Board’s members are Tawakkol Karman, a Nobel Peace Prize-winning Yemeni activist, and Alan Rusbridger, the former editor of Britain’s Guardian newspaper. The Board is funded by a $130 million trust, set up by Facebook but legally independent, and pays each of its members a six-figure sum, according to the New York Times. Facebook says the Board’s rulings will be both binding and transparent.
What the Oversight Board is doing
On Thursday, the Oversight Board announced rulings on its first five cases—a smattering of disputes about Facebook takedowns of controversial posts. In a sign that its members were prepared to overrule their progenitor, the Board overturned Facebook’s original decisions in four of the five cases, saying in a statement that its rulings “demonstrate our commitment to holding Facebook to account.” Facebook duly said it would enforce the decisions.
The Oversight Board’s public statement also hints at the bigger decision to come, on Trump. “Recent events in the United States and around the world have highlighted the enormous impact that content decisions taken by internet services have on human rights and free expression,” it said. The controversies created by those decisions, it went on, “draw attention to the value of independent oversight of the most consequential decisions by companies such as Facebook.”
There are early signs that the board’s members lean toward more permissive views on free speech—which could be a good omen for the former President. One of the five rulings the Oversight Board announced on Thursday overturned Facebook’s decision to take down an anti-Muslim post from Myanmar, where Facebook acknowledged in 2018 that it did not do enough to stop genocide against the Rohingya Muslim minority the previous year.
The post that the Board called on Facebook to reinstate included images of a dead Muslim child and a caption stating that “there is something wrong with Muslims (or Muslim men) psychologically or with their mindset,” according to a summary of the case released by the Board. The post, which Facebook had removed for violating its hate speech policies, also “seems to imply the child may have grown up to be an extremist,” the Board’s summary said. But the Board overturned Facebook’s decision to remove the post, concluding that “while the post might be considered offensive, it did not reach the level of hate speech.”
The decision worried some activists. “Facebook’s Oversight Board bent over backwards to excuse hate in Myanmar—a country where Facebook has been complicit in a genocide against Muslims,” a spokesperson for the U.S.-based NGO Muslim Advocates said in a statement. “It is clear that the Oversight Board is here to launder responsibility for Zuckerberg and [Facebook COO] Sheryl Sandberg. Instead of taking meaningful action to curb dangerous hate speech on the platform, Facebook punted responsibility to a third party board that used laughable technicalities to protect anti-Muslim hate content that contributes to genocide.”
How Facebook’s critics are responding
To many critics of Facebook, the Oversight Board is a distraction from the real issues plaguing the company: misinformation at scale, hate speech, organized violence, and the ways Facebook’s algorithms amplify those kinds of content. One group of critics has set up an alternative panel of experts, called the Real Facebook Oversight Board, and issued a statement on Thursday rubbishing the Board’s first set of decisions. “This is a PR effort that obfuscates the urgent issues that Facebook continually fails to address: the continued proliferation of hate speech and disinformation on their platforms,” the statement said.
For critics, the Oversight Board is a spectacle aimed at preserving the broad status quo: taking controversial decisions on content out of Facebook’s hands, while avoiding harder questions that might harm Facebook’s business model, like tweaking its algorithms to reduce the rapid spread of harmful content. “I think any self-regulatory effort, because that’s essentially what this is, will always fall short of a firmly rule-of-law-anchored process,” says Marietje Schaake, the international policy director at Stanford University’s Cyber Policy Center, who sits on the Real Facebook Oversight Board. “I just hope it doesn’t distract American lawmakers.”
Among the alternative board’s criticisms: the fact members of the Oversight Board were “hand-picked” by Facebook. (Facebook maintains that the Board is financially and operationally independent.) “They’re independent thinkers, but the process doesn’t set them up to be truly independent,” says Schaake. “The setup has a lot of baked-in limitations.”
One of those limitations, according to Schaake, is the Board’s jurisdiction. Currently, it can only pass rulings on whether certain posts should have been taken down by Facebook, not rule on posts that are allowed to remain online. It also cannot issue rulings on Facebook’s amplification algorithms, which many researchers say are instrumental in spreading and promoting divisive content online, or on entire Facebook groups, which are a key vector for the rapid spread of harmful content like misinformation and incitement to violence. The Oversight Board says it hopes its remit will soon expand to cover posts that remain on the site, not just ones Facebook has already taken down.
Given these limitations, the headline-grabbing matter of Trump’s account is little more than a distraction, critics say. “The fact that Donald Trump is unable to express himself on Facebook is less important than the fact that all of his followers and supporters continue to express themselves on Facebook,” says Siva Vaidhyanathan, a professor of Media Studies at the University of Virginia, who is not affiliated with the alternative board. “The phenomenon that we should be worried about is the aggregate message that undermines democracy, divides societies, spreads hatred. That continues, and Facebook either can’t or won’t do anything about it.”
And even with Trump banned from Facebook, his supporters and right-wing commentators like Dan Bongino continue to dominate the list of top-performing posts on Facebook each day, according to data compiled by New York Times reporter Kevin Roose.
“Facebook continues to allow Steve Bannon to broadcast despite calling for the beheading of a government official and continuing to make claims the 2020 election was fraudulent,” wrote Roger McNamee and Maria Ressa—both members of the alternative board—in a column for TIME on Thursday. “Evidently,” they write, “we should not take Facebook’s commitment to stop hate at face value.”
What’s next for regulating Big Tech
There are no easy answers to the question of how to set binding, democratically ordained standards for the newly powerful social media platforms. While Facebook has received plenty of criticism for the Oversight Board, no other company has established even a semi-independent body to interrogate and potentially overturn otherwise-unaccountable decisions by powerful executives.
Twitter has approached the problem differently. In a thread days after his company permanently suspended Trump, Twitter CEO Jack Dorsey said he did “not celebrate or feel pride” in the move, and went on to discuss how the events of previous days had increased the imperative to “look at how our service might incentivize distraction and harm.” The thread seemed to acknowledge the broader dynamics of algorithmic amplification—which go well beyond Trump—but his proposed solution raised some eyebrows.
He said Twitter was funding the development of a new “decentralized standard for social media” that might contribute to a future Internet “that is not controlled or influenced by any single individual or entity.” Called Bluesky, the initiative left several questions unanswered about how the problems of algorithmic amplification and harmful content would be solved. “As a free expression advocate, there’s a lot of positives and benefits to decentralized models,” Emma Llansó, the director of the Center for Democracy and Technology’s free expression project, told the site Digital Trends. “But there are questions of, if someone posts something illegal, how will law enforcement respond?”
For Schaake and other critics of Big Tech, the only enduring solution is for governments to reclaim the gatekeeping powers over the public square that the tech companies have usurped. But that will take time. In the meantime, projects like Facebook’s Oversight Board will have a chance to win—or lose—public approval.