
The Case for Protecting AI-Generated Speech With the First Amendment


Stengel is the former Editor of TIME, an MSNBC analyst and the author of Information Wars: How We Lost the Global Battle Against Disinformation.

The modern foundation of the free speech clause of the First Amendment is the concept of the marketplace of ideas. The notion traces back to John Stuart Mill, who argued that ideas should compete freely with one another and that people should form their own judgments. The market analogy itself was first drawn in Justice Oliver Wendell Holmes’ famous dissent in Abrams v. United States (1919), when he wrote, “The best test of truth is the power of the thought to get itself accepted in the competition of the market.”

This free and open marketplace of ideas is considered vital to the functioning and preservation of democracy. That is true even of the most noxious of ideas. As Holmes wrote in another famous dissent, in United States v. Schwimmer (1929), “If there is any principle of the Constitution that more imperatively calls for attachment than any other, it is the principle of free thought—not free thought for those who agree with us but freedom for the thought that we hate.”

Until recently, the Supreme Court had not cared much where those thoughts might come from, or whether their source must be human. The text of the First Amendment itself does not specify: “Congress shall make no law…abridging the freedom of speech, or of the press.” But the 2010 decision in Citizens United v. Federal Election Commission made it clear that it’s not the speaker that matters, but the speech—and that non-human actors like corporations can have free speech rights. Whatever you may think of the decision, Justice Kennedy argued that speech is such an “essential mechanism of democracy” that the speech of corporations stands on equal footing with that of any other speaker. “The government,” Kennedy wrote, “may not…deprive the public of the right and privilege to determine for itself what speech and speakers are worthy of consideration.” In other words, speech that adds to the marketplace of ideas, whatever its source, contributes to democracy and is worthy of protection. Government, he wrote, should not dictate how or where citizens get their information. Or whether the source of that information is human, a corporation, or, well, an algorithm.

It’s still early days, but there seems to be no limit to what generative AI can create: stories, poems, essays, children’s books, animated videos, movies, and actual speeches. The future of media and content will be bound up with generative AI in ways that we don’t yet know. AI itself is not human and cannot have constitutional rights, writes Cass Sunstein, just as a vacuum cleaner does not have constitutional rights. But it seems pretty clear that content created by generative AI probably has free speech protections. It is speech. It is speech created out of the raw material of human speech, from code made by humans. It certainly contributes to the marketplace of ideas—it may well contribute too much. The modern Court has shown over and over that government cannot restrict speech because of its message, its subject matter, or its content. [See Police Department of Chicago v. Mosley (1972)] At the same time, it makes sense that government can restrict any AI speech that falls into the categories traditionally not protected by the First Amendment: libel, criminal solicitation, false advertising, child pornography, and speech that leads to imminent lawless action. Such speech is unprotected whether it is created by a human, an algorithm, or a toaster. The question is whether government can regulate AI to prevent unprotected speech.


In the 20th century, the First Amendment was about protecting speech from government. In the 21st century, it will be about protecting the audience, not the speaker. The basic principle of the First Amendment is to protect the right of citizens to hear anything they want to hear. As Tim Wu has noted, direct government censorship of speech today is almost non-existent. For most of its history, the First Amendment was seen as a negative right—a right against coercive or restrictive government action—not a positive right that obliges the government to create a better and more bountiful market. In the late 1920s, the Court began to use the First Amendment to prevent government suppression of anti-government opinions. The idea was that speech was scarce, especially speech that criticized state power, and it needed to be protected from government interference and intimidation. The opposite situation exists today: there is a superabundance of speech, including criticism of the state, and it’s the listener who requires protection, not the speaker. It’s never been easier—or cheaper—to have a soapbox; it’s never been harder to get people’s attention. The two facts are connected. As the Nobel laureate Herbert Simon famously said, “a wealth of information creates a poverty of attention.”

Perhaps that’s reason enough for a new digital fairness doctrine. The original fairness doctrine, introduced in 1949, was justified by the limited spectrum of frequencies available to broadcasters. That justification faded away with cable television and the internet. Now the limited spectrum is human attention. In the modern attention economy, attention itself is the finite resource that deserves protection. One of the modern challenges of social media is that negative speech can be used to drown out positive speech: the use of troll armies and bots to harass and intimidate speakers is a distortion of the marketplace of ideas. It is the modern form of censorship. It was Justice Louis Brandeis who suggested that the remedy for harmful speech is more speech. But in the age of bots, troll armies, and generative AI, more speech is not always a virtue.

Yes, there are dangers in extending free speech protections to generative AI. Generative AI can produce all the bad forms of speech that a human being can, just much, much faster and at scale. What if the internet trolls at the Internet Research Agency in St. Petersburg had had ChatGPT in 2016? Would their disinformation have been more effective? Yes, of course. Can generative AI think of better ways to create disinformation and make it more effective? Yes. Can it create even more virulent hate speech? Can it impersonate others online? Sure. Can it tell you how to make a hand-held nuclear weapon? Yes. The open letter signed by dozens of scientists and tech leaders, including Elon Musk, asking for a pause in the development of artificial intelligence noted, “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?”

But that same question could have been asked of every technological content innovation: the printing press, radio and television, the internet, social media. Or even Photoshop. All created exponentially more speech. The mechanism by which speech is created and delivered is neutral. The tools are not the enemy. The First Amendment says nothing about the quality of speech or its volume. Why not create a generative AI that checks facts and exposes mis- and disinformation? Isn’t that the more First Amendment-aligned remedy? AI technology can create speech that is fact-checked even as it is created.

In some ways, the more practical and relevant question is not whether generative AI is protected speech but whether the developers of generative AI have any liability for speech that falls outside First Amendment protection: defamation, false commercial speech, child pornography, and “true threats.” Section 230 of the Communications Decency Act of 1996 famously gave social media companies freedom from liability for the content they hosted, treating them neither as publishers nor as speakers. But one way to prevent generative AI from creating the excesses of speech we see on social media would be to give the developers of generative AI some legal liability for the content their product creates. Government can legislate that all AI content be watermarked in some fashion and can bar AI from creating unprotected speech. All of this suggests that we should start to think of the free speech clause in terms of what government can do to foster a healthier speech environment: to focus not so much on what the First Amendment proscribes as on what it permits. Speech from generative AI is a more difficult value to defend, but that’s exactly when we need to defend our values.

