How to Kill Online Free Speech

Mchangama is the CEO of the Future of Free Speech Project at Vanderbilt University, a Senior Fellow at FIRE, and the author of Free Speech: A History From Socrates to Social Media.
Kosseff is a senior legal fellow at The Future of Free Speech. His next book, Liar in a Crowded Theater: Freedom of Speech in a World of Misinformation, will be published this fall.

On Cinco de Mayo, New York City Mayor Eric Adams’ Facebook account livestreamed a celebration, and the comments section turned into a dumpster fire. Many came to vent — often hatefully — about the city’s migrant crisis. One commenter lamented the “destruction” of New York by “all, these unwashed illegals immigrants,” and another told the mayor, “Don’t send your garbage upstate! Keep it down there!”

Anti-immigrant and xenophobic comments are frequent on Adams’ Facebook account. When Adams streamed a celebration of Eid at his official residence featuring several Muslim speakers, a user replied “Stupid. Here to ruin another state. Another non American here to ruin within.”

However offensive to Mexicans and Muslims, these comments are perfectly legal under the First Amendment. And as a public official, Mayor Adams is prohibited by the First Amendment from suppressing specific viewpoints in a public forum such as his official Facebook account. But had Adams been the mayor of Paris, the irate users could potentially have landed him in legal jeopardy.

According to a new landmark decision by the European Court of Human Rights (ECHR), freedom of expression does not immunize public officials from criminal liability if they fail to promptly remove manifestly illegal content (such as “hate speech”) posted on their accounts by followers. The recent decision reveals the censorial route that Europe’s judiciary is choosing, and it provides a cautionary tale as the U.S. Supreme Court faces many challenges to its robust protections for online speech.

In Sanchez v. France, the mayor of a French town, a local councilor and parliamentary candidate for the right-wing Rassemblement National Party, used his Facebook account to mock a rival political party’s failing website. Some users commented on the post, accusing the rival party of being “allies of the muslims” and describing the city of Nimes as being infested by “Drug trafficking run by the muslims” and a place where “stones get thrown at cars belonging to ‘white people.’”

Even though one of the comments was quickly deleted by the user and the politician warned his Facebook followers to “be careful with the content of [their] comments,” he was convicted for incitement to religious hatred and fined 3,000 euros. In its decision, the ECHR stressed that due to a politician’s “particular status and position in society,” he or she is more likely to “influence voters, or even to incite them, directly or indirectly, to adopt positions and conduct that may prove unlawful” and therefore politicians must be “all the more vigilant” in policing content.

The ECHR’s decision is deeply antithetical to the egalitarian ideals of online free speech, and it’s likely to skew the public sphere in favor of the powerful and platformed to the disadvantage of the voiceless and marginalized. One of social media’s most empowering aspects is that it gives ordinary citizens and voters the ability to tell their “betters” what they think of them without pleasantries.

If politicians and public officials risk criminal liability for comments made by third-party users, they will be strongly incentivized to simply disable comments. This is especially true for the most prominent politicians. Adams, for example, has 362,000 Facebook followers, and his posts often attract hundreds of comments, which no single person — even with staff — could reasonably be expected to proactively review for compliance with existing laws, while also serving in office.

This problem is compounded by the fact that the ECHR has never defined “hate speech,” which may include such vague and inherently subjective categories as “insulting” and “hurtful” comments. In 2018, the ECHR even found that a Russian journalist had exceeded the limits of free speech and “stir[red] up a deep-seated and irrational hatred” towards the Russian army by comparing its soldiers to “maniacs” and “murderers.” This decision hasn’t aged well given the horrors inflicted by Russian troops in Ukraine.

Even worse, the ECHR’s decision might actually legitimize the selective and viewpoint-based repression of dissent and political criticism. A politician might, for instance, discriminate systematically against his or her most vociferous online critics by defining their comments as “hate speech” or other forms of illegal content. The result is likely to reduce the social media accounts of politicians and public officials to one-sided public relations platforms where they can spread their messages with little opportunity for public criticism by the people they’re supposed to represent. After all, no politician likes to be mocked and criticized in public.

U.S. protections for political speech are vastly more expansive than those of Europe. Although the First Amendment does not protect certain narrow categories of speech, such as true threats and imminent incitement of lawless action, it does not have a general “hate speech” exception. In 1969, the Supreme Court struck down the conviction of an Ohio Ku Klux Klan leader who organized a rally and cross burning, writing that the law under which he was prosecuted “purports to punish mere advocacy.”

Even if hate speech were not constitutionally protected, Section 230 of the Communications Decency Act immunizes platforms from civil claims and state criminal prosecutions arising from the platforms’ failure to take down (or keep up) user content.

In 1997, the Supreme Court struck down a law that criminalized online indecency, recognizing that the internet is “a unique and wholly new medium of worldwide human communication.” In the quarter-century since, the Court has consistently rejected attempts to allow lawmakers, officials, and courts to micromanage online speech. This month, the U.S. Supreme Court declined to rule on whether Section 230 applies to algorithmic promotion of user content, leaving in place more than a quarter-century of broad interpretations of the statute’s immunity.

But the U.S.’s hands-off approach to the internet is increasingly threatened. In the next term, the Supreme Court will likely hear challenges to Florida and Texas laws that restrict the ability of platforms to moderate content. Some states have passed laws that require people to verify their age before using social media, making it far more difficult for people to speak anonymously online. And in the past few years, members of Congress have introduced dozens of proposals to scale back or eliminate Section 230.

The U.S. faces serious online harms and real challenges, and the tech giants that dominate centralized social media platforms are often part of the problem. But new challenges should not cause the nation to deviate from more than a century of robust free speech ideals. The ECHR’s ruling in Sanchez offers a window into a dystopian future of online free speech in America. It is a window that we hope the U.S. will shut for good.


TIME Ideas hosts the world's leading voices, providing commentary on events in news, society, and culture. We welcome outside contributions. Opinions expressed do not necessarily reflect the views of TIME editors.