Is it wise to try to regulate social media platforms? Can it even be done? These questions are vexing lawmakers in almost every democracy. And finally, after years of debate, some answers are coming into view.
Before 2016, online regulation was pretty low on the political agenda. That changed after the election of Donald Trump and the Brexit referendum. In each case, the losing side came to believe (with some, but only some, justification) that shady digital forces had been weaponized against them. Today, powerful voices on the right also argue for regulation, to stop platforms from “censoring” conservative voices.
The basic case for legislative intervention is, in fact, non-partisan. It’s simply that, as more and more of our discourse migrates online, social media platforms are increasingly trusted to draw the borders of free expression. They order, filter and present the world’s information. They set rules about what may be said and who may say it. They approve, and ban, speakers and ideas. And when they do these things, they necessarily apply their own rules, principles, biases, and philosophies. That’s not a criticism (sometimes the right will be aggrieved, sometimes the left), but it does mean that the choice is not between regulating speech and leaving it alone. Speech is already being regulated by platforms.
And they have potent powers of enforcement: with a single click they can stifle a voice, make an idea disappear, or send it viral. The case for regulation does not depend on the (usually simplistic) claim that particular platforms are actually biased one way or another. The issue is rather that they increasingly have the power to influence democratic discourse without appropriate checks and balances. They might make mistakes. They might make decisions that offend the basic norms of a free society. They might inadvertently design systems that harm the democratic process. Just like others in positions of social responsibility (lawyers, doctors, bankers, pilots), those who assume the power to control the speech environment ought to be subject to a degree of oversight. Why are there higher qualifications and standards for a person who runs a pharmacy than for a person who runs a major social platform?
The second, more difficult question is whether it is practicable to regulate social media platforms. There are at least three overlapping challenges.
The first is a deep and justifiable concern about governments becoming too closely involved in the regulation of speech. History shows that even democratic regimes can be tempted to over-censor—in the name of religious orthodoxy, moral propriety, political correctness, national security, public order, or even (with the connivance of their supporters) political expediency. Any sound regime for social media governance must avoid giving too much arbitrary power to the state. In the United States, this is a core constitutional precept.
Scale poses another challenge. Platforms come in different sizes. For modest ones, burdensome regulation would make survival impossible. For larger ones, the challenge lies in their mind-boggling scale. Every day, Facebook hosts billions of new posts. After a British teenager took her own life in 2017—the tragedy that prompted the UK Parliament to review its laws—Facebook and Instagram removed around 35,000 posts relating to self-harm and suicide every day. Even if the rules were clear and the platforms properly incentivized and resourced, mistakes would be inevitable. As Monika Bickert, Facebook’s Head of Global Policy Management, has put it: “A company that reviews a hundred thousand pieces of content per day and maintains a 99 per cent accuracy rate may still have up to a thousand errors.” And even that hypothetical example understates the scale of the challenge.
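Bickert’s hypothetical is easy to extend. Here is a rough, back-of-envelope sketch: the 99 per cent accuracy figure comes from her quote, while the larger daily review volumes are illustrative assumptions rather than reported figures.

```python
# Back-of-envelope sketch of how moderation errors scale with volume.
# The 99% accuracy rate comes from Bickert's hypothetical; the daily
# review volumes are illustrative assumptions, not reported data.

def expected_errors(reviews_per_day: int, accuracy: float) -> int:
    """Pieces of content likely to be mishandled each day."""
    return round(reviews_per_day * (1 - accuracy))

for volume in (100_000, 1_000_000, 100_000_000):
    errors = expected_errors(volume, accuracy=0.99)
    print(f"{volume:>11,} reviews/day at 99% accuracy -> ~{errors:,} errors/day")

# 100,000 reviews/day     -> ~1,000 errors/day
# 100,000,000 reviews/day -> ~1,000,000 errors/day
```

At the hypothetical volumes above, a one-in-a-hundred error rate already implies anywhere from a thousand to a million wrong calls a day.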
The final issue is harder still. People cannot agree on what an “ideal” online speech environment would look like. Some goals, like stopping the dissemination of child pornography, command broad consensus. But others are less clear-cut. Take the problem of online disinformation. There is legitimate debate about whether it is best countered by (a) removing it altogether; (b) preventing algorithms from amplifying it; or (c) simply rebutting it with the truth. There’s no philosophically correct answer here. Reasonable people will disagree. The same goes for questions about how to regulate speech that is incendiary but not unlawful (such as claims that the 2020 US presidential election was “stolen”), speech that is offensive but not unlawful (for example, mocking a religious prophet), and speech that is harmful but not illegal (such as content encouraging young girls to starve their bodies, or quack theories about COVID-19). What’s the proper approach to this kind of speech? Ban it? Suppress it? Rebut it? Ignore it? No policy is universally accepted as correct, even in places with robust free speech norms.
These challenges have led many commentators to conclude that regulation of social media is ultimately futile. But it helps to remember that any new system of regulation would not be aiming at perfection. The realm of speech is inherently chaotic. There will always be controversy. There will always be tumult. There will always be lies and slanders. Especially on social media, where conflict gets more clicks than consensus. Every word of moral outrage is said to increase the rate of retweets by 17 per cent.
Rather than regulatory perfection, we can sensibly aim for a reduction in imperfection. Instead of aiming to prevent all online harm, we can aim for a reduction in the risk of harm. And if we can make incremental gains without causing new harm in the process, that would be progress. The question is not “would this system be perfect?” but “would it be better than what we’ve got?”
So what would a better system look like?
It would start by ranking platforms according to their level of social risk. At the lower end would be modest online spaces like community forums, hobbyist groups and fansites. These should be subject only to minimal regulation, and remain largely immune from liability for the content they host. This is not because small platforms are always pleasant places (many are dens of iniquity) but because they are easy to leave and easy to replace, and the harms they generate do not usually spill over into wider society. Besides, too much regulation could stifle them. At the other end of the scale would be very large, important platforms like Facebook and Twitter. These have the capacity to frame the political agenda, rapidly disseminate content and shape the opinions and behavior of millions of people. They are difficult for users to leave, and for rivals to challenge. They are vital spaces for civic and commercial life. These kinds of platforms need more robust oversight.
Of course, size would not be the only guide to risk (small platforms can pose real social risks if they become hotbeds of extremism, for example), but it would be an important one. The Digital Services Act, adopted by the European Parliament in July 2022, distinguishes between “micro or small enterprises” and “very large online platforms” that pose “systemic” risks.
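To make the tiering idea concrete, here is a toy sketch of how a size-plus-risk classification might work. The tier names, user thresholds and risk flag below are hypothetical illustrations, not the Digital Services Act’s actual legal tests, although the DSA does reserve its heaviest obligations for platforms with roughly 45 million or more monthly users in the EU.

```python
# Toy illustration of risk-tiered platform classification.
# Tier names, thresholds, and the risk flag are hypothetical examples,
# not the Digital Services Act's legal definitions.

from dataclasses import dataclass

@dataclass
class Platform:
    name: str
    monthly_active_users: int
    extremism_hotbed: bool = False  # one example of a non-size risk signal

def regulatory_tier(p: Platform) -> str:
    """Assign an oversight tier based on size plus a simple risk flag."""
    if p.monthly_active_users >= 45_000_000 or p.extremism_hotbed:
        return "very large or systemic: full system-level obligations"
    if p.monthly_active_users >= 1_000_000:
        return "mid-size: proportionate obligations"
    return "micro or small: minimal obligations, broad immunity"

print(regulatory_tier(Platform("community forum", 40_000)))
print(regulatory_tier(Platform("major social network", 300_000_000)))
```

The point of the sketch is simply that size sets the baseline tier, while other risk signals can bump a platform upwards; any real regime would use far richer criteria.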
Next, platforms categorized as sufficiently risky should be regulated at the system or design level (as proposed in the UK’s Online Safety Bill, which is currently on ice). Lawmakers might, for example, decide that platforms should have reasonable or proportionate systems in place to reduce the risk of online harassment, or to reduce the risk of foreign interference in the political process. These requirements would be backed up by enforcement action: platforms would face sanctions if their systems were inadequate. Serious fines and the possibility of criminal sanction for major misconduct should be on the table. But on the flip side, if platforms’ systems were certified as adequate, they would enjoy a high degree of immunity from lawsuits brought by individual users. Stick and carrot.
This brand of regulation (system-level oversight, graded according to social risk, with an emphasis on outcomes) means the regulator would not be expected to interfere with on-the-ground operational decisions. There would be no government “censor” scrutinizing individual moderation decisions or pieces of content. Platforms would be entitled to make mistakes, as long as their overall systems were adequate. And the creative burden would be on the platforms themselves to work out how best to meet the aims that have been democratically set for them. They would be incentivized to come up with new interfaces, new algorithms, perhaps even new business models. That is appropriate. Platforms are better placed than regulators to understand the workings of their own systems, and we would all benefit if more of their considerable genius were refocused on reducing social harms rather than amplifying them.