The future of the federal law that shields online platforms from liability for content users post on their sites is up in the air, as the Supreme Court is set to hear two cases this week that could change the internet.
The first case, Gonzalez v. Google, set to be heard on Tuesday, argues that YouTube’s algorithm helped ISIS post videos and recruit members, making online platforms directly and secondarily liable for the 2015 Paris attacks that killed 130 people, including 23-year-old American college student Nohemi Gonzalez. Gonzalez’s parents and the families of other victims are seeking damages under the Anti-Terrorism Act.
Oral arguments for Twitter v. Taamneh, a case that makes similar arguments against Google, Twitter, and Facebook and centers on another ISIS terrorist attack, which killed 29 people in Istanbul, Turkey, will be heard on Wednesday.
The cases will decide whether online platforms can be held liable for the targeted advertisements or algorithmic content spread on their platforms.
Tech companies argue that Section 230 protects them from these types of lawsuits because it grants them legal immunity from liability over third-party content posted on their platforms.
Here’s what to know about Section 230.
What is Section 230?
Section 230, which passed in 1996, is a part of the Communications Decency Act.
The law explicitly states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” meaning online platforms are not responsible for the content a user may post.
The law also allows tech companies to moderate or remove content they consider objectionable. Section 230, however, does not protect sites that violate federal criminal law or intellectual property law, nor does it protect platforms that create illegal or harmful content themselves.
Because popular sites like Facebook, Twitter and YouTube rely on user-generated content, many people have credited Section 230 for the creation of the internet we now know and love.
As the scale of online platforms has grown dramatically, with up to 368 million monthly active users on Twitter alone, experts argue that Section 230 protects companies that cannot realistically review everything posted on their platforms from being sued over what users say or do.
What are these cases about?
The Gonzalez family first filed a suit in 2016, alleging that because Google, which owns YouTube, matches and suggests content to users based on their views, the platform recommended ISIS’s content to users, and enabled them to find other videos and accounts owned by ISIS.
Plaintiffs also argued that Google placed paid advertisements on ISIS videos, which meant they shared ad revenue with the terrorist organization. The lawsuit argues that this means that Google has not taken enough action to ensure ISIS remains off the platform. Because of this, the plaintiffs allege that these tech companies are directly liable for “committing acts of international terrorism” and secondarily liable for “conspiring with, and aiding and abetting, ISIS’s acts of international terrorism.”
A federal district court in California dismissed the complaint, saying that Google could not be held responsible for content produced by ISIS. The U.S. Court of Appeals for the Ninth Circuit sided with the district court, but in October, the Supreme Court agreed to hear the case.
In an opposition brief filed with the Supreme Court, Google maintained that review of the case was not warranted because websites like YouTube cannot be held liable as the “publisher or speaker” of content users create. It added that Google does not have the capacity to screen “all third-party content for illegal or tortious material” and that “the threat of liability could prompt sweeping restrictions on online activity.”
Major tech companies like Twitter and Meta, which have expressed their support for Google in the case, say that recommendations based on their algorithms allow them to “organize, rank, and display” user content in a way that enhances a user’s experience on the platforms and called the ability to do so “indispensable.”
What is the future of Section 230?
If the court decides in Gonzalez’s favor, the lawsuit will set a precedent for holding tech companies liable for targeted ads or recommendations.
The effects this could have on the internet are not entirely known, though many warn that tech companies would face a host of lawsuits. Corporate giants like Yelp, Reddit, Microsoft, Craigslist, Twitter and Facebook say that searches for jobs and restaurants could be restricted if platforms can be sued over what users post, according to the Associated Press. Review sites could even be held liable for defamation if a restaurant received bad ratings.
Even dating apps like Tinder and Match have called Section 230 essential to the user experience, as they hope to continue providing match recommendations “without having to fear overwhelming litigation,” according to CBS.
How do legislators feel about Section 230?
Conservatives have long criticized Section 230, alleging that it allows social media platforms to censor right-leaning content.
That scrutiny has focused on platforms like Twitter, which came under fire after it removed a New York Post story about Hunter Biden’s laptop. Twitter executives later called the action a mistake in a House committee hearing, but many conservatives have cited it as evidence of bias. Lawmakers also criticized social platforms’ 2018 bans of conspiracy theorist Alex Jones’ Infowars pages.
Former President Donald Trump called for repealing the law, even prompting the Justice Department to release proposed amendments to Section 230 in 2020.
“I’ll just cut to the chase, Big Tech is out to get conservatives,” said Rep. Jim Jordan in a House Judiciary Committee hearing in July 2020. “That’s not a hunch, that’s not a suspicion, that’s a fact.”
Democrats have also criticized Section 230, arguing that it prevents platforms from being held liable for hate speech and misinformation spread on their sites.
In July 2021, Senators Amy Klobuchar and Ben Ray Luján introduced a bill that would strip tech companies of their immunity from lawsuits if their algorithms promoted health misinformation.
The White House later called on Congress to revoke Section 230 during a September “listening session” on tech companies’ accountability. And in January, President Joe Biden published an op-ed in the Wall Street Journal calling for bipartisan legislation to hold tech companies accountable.
“The American tech industry is the most innovative in the world…But like many Americans, I’m concerned about how some in the industry collect, share and exploit our most personal data, deepen extremism and polarization in our country, tilt our economy’s playing field, violate the civil rights of women and minorities, and even put our children at risk,” Biden wrote.