Elon Musk’s bid to acquire Twitter in a deal worth $44 billion has sparked debate and concern about what the platform will look like once he takes over. The world’s richest man will steer the future of a small but influential platform of more than 200 million users, inheriting a heated debate over how to tackle hate speech, misinformation and polarization.
Musk has alarmed Twitter employees and digital safety experts by describing himself as a “free speech absolutist,” raising fears that Twitter’s content moderation regime, while imperfect, could be jettisoned.
The need to improve online spaces is an ongoing conversation, often playing out on Twitter itself. Women, people of color and other marginalized groups navigate a treacherous environment online. For example, research by Amnesty International found that 29% of U.S. women users had faced threats of physical or sexual violence online.
Read more: Twitter Has Helped Build Social Movements Across the World. Now Organizers Fear For Its Future
Here are four suggestions from experts to reform social media that don’t cost $44 billion.
Radically increase transparency
The greatest problems afflicting platforms like Twitter and Facebook are harassment, abuse and the spread of misinformation.
Internally, Twitter has taken strides to get a grip on these problems. But currently, “it’s a black box,” according to Michael Kleinman, director of Amnesty’s Silicon Valley Initiative. “It’s still unclear after all these years how they decide what is allowable and what is not,” he tells TIME. In some cases, it is also unclear which actions are working and which aren’t.
One idea is to be radically transparent. That means sharing curation algorithms, content moderation decisions, reporting outcomes, the effectiveness of actions taken, the number of appeals and their upshot, and so on. (Musk has expressed a willingness to “open source” Twitter’s algorithms, but some digital security experts cautioned this cannot be a substitute for good content moderation.)
This would help accrue “a kind of common law,” says Lucas Graves, assistant professor at the University of Wisconsin-Madison. As it stands, we have the equivalent of secret courts: their decisions are unseen and their judgments forgotten. Transparency “pushes back against the arbitrariness” of executives, Graves says.
Standardize and professionalize content moderation
“I would immediately scale up the number of human moderators,” says Matthias Kettemann, professor of innovation law at the University of Innsbruck. Reliance on algorithmic moderation has clear limits. “Humans are better than machines at recognizing nuances in speech, distinguishing between humor and hate,” and at assessing posts in context, says Kettemann.
Ideally, moderators would act less like firefighters and more like planners, spotting risks ahead of time.
Moderation needs an injection of local expertise. “We see a need for companies to address a chronic underinvestment in user safety outside of North America and Western Europe,” Deborah Brown, senior researcher and advocate on digital rights at Human Rights Watch, tells TIME.
These measures will take significant cash, especially for smaller platforms like Snapchat and Reddit; the latter relies heavily on volunteer moderators and has had its own issues with hate speech. But for Kettemann, it’s worth it. “The sheer amount of money that Elon is spending, if a fraction of that was spent on content moderation, it would go extremely far towards creating a better online experience.”
Read more: Why Elon Musk’s Plans to ‘Fix’ Twitter Will Be Harder to Implement Than He Thinks
Existing guidelines also need to be enforced consistently, especially when public figures break the rules. It took multiple infringements before former President Donald Trump was banned from Twitter, allowing him in the meantime to spread baseless misinformation about vote counts.
One model for professionalizing moderation, says Graves, is for tech companies to cede some control to independent experts, as in Facebook’s partnership with the International Fact-Checking Network.
But, as Musk is likely to discover, social media moderation is complex. Rasmus Kleis Nielsen, director of the University of Oxford’s Reuters Institute, says that political polarization means there is no agreement on what good and bad moderation, or even “harmful” content, looks like.
The wellbeing of content moderators must also be taken into account. Content moderators for Meta-owned Facebook and WhatsApp have raised concerns about their treatment and working conditions. “It can be quite traumatic work,” Human Rights Watch’s Brown says.
Read more: Inside Facebook’s African Sweatshop
Empower users and center human rights
Platforms have been designed to maximize user attention to sell advertisements, Samuel Woolley, assistant professor at Moody College of Communication at the University of Texas at Austin, tells TIME. That means that “user wellbeing, trust, safety” are secondary without a reorientation, he says.
Giving users more influence over content and privacy is crucial. This will require “cutting off a revenue stream based on pervasive surveillance,” Brown says.
Platforms have typically “rushed to capture new markets,” Brown adds, without “carrying out human rights due diligence.” That has led to some catastrophic consequences, such as Facebook’s promotion of posts supporting Myanmar’s military, which has been widely accused of committing genocide in the country.
Regulate robustly on a global scale
Ultimately, there’s a limit to what platforms will do voluntarily. The European Union is making moves to compel social media companies to clean up their act.
The E.U. agreed on two pieces of landmark legislation this spring: the Digital Services Act, which would force platforms to tackle misinformation and disclose how they amplify divisive content, and the Digital Markets Act, which would prevent big tech firms from dominating digital markets. And there’s a stick attached: skirt the legislation, and platforms could be fined billions of dollars.
Similar U.S. measures policing online speech could run into First Amendment trouble, and antitrust progress is hampered by Congressional polarization. But E.U. legislation forcing companies to adapt their practices in one region could result in those changes being applied in other jurisdictions, since it is easier for companies to standardize procedures across markets. “A rising tide lifts all boats,” Kleinman says.
Some have advocated more radical antitrust action: breaking up big tech firms. The argument is that platforms such as Meta or Alphabet, by owning digital marketplaces while also utilizing user data to generate ad revenue, are “both player and referee,” says Gennie Gebhart, director of activism at the Electronic Frontier Foundation.
But Nielsen notes that lawmakers in some countries are charting paths in the opposite direction. Brazil’s President Jair Bolsonaro, for instance, has banned social media companies from removing certain content. Meanwhile, India’s government has imposed rules on social media companies that international NGOs have branded a step toward “digital authoritarianism.” The regulations force companies to remove posts that the government says are illegal, including content that threatens “the interests of the sovereignty and integrity of India,” public order, decency, or morality, or that incites an offense.
A global standard for regulation seems elusive for now.