In the face of claims that they prioritize profits over people, Facebook and Instagram are under mounting pressure to reform the ways in which their platforms are built to be addictive for users. Experts say this increased scrutiny could signal that the social media industry’s current business model is due for a significant disruption.
Often compared to Big Tobacco because their products are addictive and profitable but ultimately unhealthy for users, social media’s biggest players face growing calls for both accountability and regulatory action. To make money, these platforms’ algorithms are designed to keep users engaged and scrolling through content, and by extension advertisements, for as long as possible.
“[These companies] aren’t being held accountable for any of their business practices,” says Sherry Fowler, a professor of practice in information technology and business analytics at North Carolina State University. “I think we’re at the same point [with Big Tech] that we were when Big Tobacco was forced to share the research on how its products were harming individuals. There had to be this mass campaign to inform the public because, at that point, a lot of people didn’t even know that tobacco was addictive. Here we have something that’s just as addictive and we’ve allowed the companies not to have to answer to anybody. Certain industries can’t run rampant without any rules being enforced.”
With bipartisan consensus growing that more must be done to curb the algorithmic tools that social media’s leading platforms use to drive engagement at the cost of individual and societal wellbeing, government intervention seems more inevitable—and imminent—than ever in the year ahead.
Bipartisan lawmakers have already introduced a House bill dubbed the Filter Bubble Transparency Act that would require platforms to offer a version of their services where content is not selected by “opaque algorithms” that draw on personal user data to generate recommendations.
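In engineering terms, the option the bill contemplates could be as simple as letting a user switch off personalization entirely, falling back to an ordering that uses no personal data, such as reverse-chronological. The sketch below is hypothetical and illustrative only; the field names, the relevance function and the feed structure are invented, not drawn from the bill or any platform’s code.

```python
from datetime import datetime
from typing import Optional

def relevance(post: dict, profile: dict) -> float:
    # Hypothetical personalization: score a post by how strongly its topic
    # tags overlap with interests inferred from the user's personal data.
    return sum(profile.get(tag, 0.0) for tag in post["tags"])

def order_feed(posts: list[dict], profile: Optional[dict], personalized: bool) -> list[dict]:
    # The opt-out path the bill envisions: ignore personal data, sort by recency.
    if not personalized or profile is None:
        return sorted(posts, key=lambda p: p["created_at"], reverse=True)
    # The default path: an "opaque algorithm" ranks posts by inferred interest.
    return sorted(posts, key=lambda p: relevance(p, profile), reverse=True)

posts = [
    {"id": "a", "tags": ["politics"], "created_at": datetime(2021, 12, 1)},
    {"id": "b", "tags": ["fitness"],  "created_at": datetime(2021, 12, 15)},
]
print([p["id"] for p in order_feed(posts, {"politics": 0.9}, personalized=True)])   # ['a', 'b']
print([p["id"] for p in order_feed(posts, {"politics": 0.9}, personalized=False)])  # ['b', 'a']
```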
As Republican Rep. Ken Buck, one of the representatives sponsoring the legislation, told Axios, “Consumers should have the option to engage with internet platforms without being manipulated by secret algorithms driven by user-specific data.” But the question remains: Is real change possible?
The dark side of addictive algorithms
While concerns over addictive algorithms extend beyond the two platforms owned by Meta, the company formerly known as Facebook, internal documents—sometimes referred to as the “Facebook Papers”—leaked in recent months by Facebook product manager turned whistleblower Frances Haugen have shone a spotlight on the harmful effects that Facebook and Instagram specifically can have on users, and especially young users.
In her October testimony to a Senate Commerce subcommittee, Haugen said that Facebook’s use of “engagement-based ranking”—an algorithmic system that rewards posts that generate the most likes, comments and shares—and its heavy weighting of “meaningful social interactions”—content that generates strong reactions—have resulted in a system that has amplified divisive content on the platform, fostered hate speech and misinformation, and incited violence.
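As a rough illustration of what engagement-based ranking means in practice, here is a minimal, hypothetical sketch in Python. The weights and post fields are invented for the example; Facebook’s real scoring system is far more elaborate and has never been made public.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    likes: int
    comments: int
    shares: int

# Invented weights: comments and shares, proxies for "meaningful social
# interactions," count far more than passive likes, so reaction-provoking
# posts float to the top of the feed.
WEIGHTS = {"likes": 1.0, "comments": 15.0, "shares": 30.0}

def engagement_score(post: Post) -> float:
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["shares"] * post.shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Show the highest-scoring posts first.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm-update",   likes=400, comments=5,  shares=2),
    Post("divisive-take", likes=150, comments=80, shares=40),
])
print([p.post_id for p in feed])  # ['divisive-take', 'calm-update']
```

Under these made-up weights, a post with fewer likes but far more comments and shares outranks a quieter one, which is the dynamic Haugen argues rewards divisive content.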
On Instagram, these mechanisms push children and teens to harmful content that can lead to body image issues, mental health crises and bullying. Internal research leaked by Haugen showed that some of the features that play a key role in Instagram’s success and addictive nature, like the Explore page, which serves users curated posts based on their interests, are among the most harmful to young people. “Aspects of Instagram exacerbate each other to create a perfect storm,” one report read.
Meta did not immediately respond to TIME’s request for comment on potential algorithmic changes.
Popular video platforms like TikTok and YouTube have also come under fire for employing algorithmic recommendation systems that can lead viewers down dangerous—and addictive—rabbit holes. In December, the New York Times reported that an internal document detailing the four main goals of TikTok’s algorithm had been leaked by a source who was “disturbed by the app’s push toward ‘sad’ content that could induce self-harm.”
In response to a request for comment, a TikTok spokesperson pointed TIME to a Dec. 16 Newsroom post on the work the platform is doing to safeguard and diversify its For You feed recommendations.
“As we continue to develop new strategies to interrupt repetitive patterns, we’re looking at how our system can better vary the kinds of content that may be recommended in a sequence,” the post read. “That’s why we’re testing ways to avoid recommending a series of similar content—such as around extreme dieting or fitness, sadness, or breakups—to protect against viewing too much of a content category that may be fine as a single video but problematic if viewed in clusters.”
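One common way to implement the kind of dispersion TikTok describes is to re-rank candidate videos so that items from the same category never appear in long runs. The greedy re-ranker below is a hypothetical sketch of that idea, not TikTok’s actual system; the category labels and run-length limit are invented for illustration.

```python
def disperse(ranked: list[dict], max_run: int = 1) -> list[dict]:
    """Re-order ranked candidates so that no more than `max_run` consecutive
    recommendations share a content category."""
    result, pool = [], list(ranked)
    while pool:
        tail = [item["category"] for item in result[-max_run:]]
        run_full = len(tail) == max_run and len(set(tail)) == 1
        # Take the highest-ranked candidate that breaks the current run;
        # if none exists, fall back to the best remaining item.
        pick = next(
            (item for item in pool if not run_full or item["category"] != tail[-1]),
            pool[0],
        )
        pool.remove(pick)
        result.append(pick)
    return result

candidates = [  # already ranked by predicted engagement
    {"id": 1, "category": "extreme dieting"},
    {"id": 2, "category": "extreme dieting"},
    {"id": 3, "category": "cooking"},
    {"id": 4, "category": "extreme dieting"},
]
print([c["id"] for c in disperse(candidates)])  # [1, 3, 2, 4]: similar videos are spaced out where possible
```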
Continued lack of transparency
Still, one of the major challenges in regulating these platforms is the continued lack of transparency surrounding their inner workings.
“We don’t know a ton about just how bad these networks are, in part, because it’s so hard to research them,” says Ethan Zuckerman, an associate professor of public policy, communication and information at the University of Massachusetts Amherst. “We’re relying on Frances Haugen’s leaks rather than doing [independent] research because, in most cases, we can’t get the data that actually tells the story.”
When it comes to hate speech, extremism and misinformation, Zuckerman says the algorithm might not even be the most dangerous part of the problem.
“I tend to think the algorithm part of the story is overhyped,” he says. “There’s a decent amount of research out there on YouTube—which is a lot easier to study than Facebook, for instance—that suggests the algorithm is really only a small part of the equation and the real problem is people who are looking for hateful or extreme speech on the platform and finding it. Similarly, when you look at some of what’s happened around misinformation and disinformation, there’s some pretty good evidence that it’s not necessarily being algorithmically fed to people, but that people are choosing to join groups in which it’s the currency of the day.”
But without the ability to obtain more data, researchers like Zuckerman aren’t able to fully get to the root of these escalating issues.
“The very little bits of information that have been made available by companies like Facebook are intriguing, but they’re sort of just hints of what’s actually going on,” he says. “In some cases, it’s more than just [that] the platform would have to give us more data. It’s that we would actually need the ability to go in and audit these platforms in a meaningful way.”
Referencing the infamous Cambridge Analytica scandal—wherein the political consulting firm harvested the data of at least 87 million Facebook users in order to help elect Donald Trump as president—Zuckerman says that Facebook and other companies rely on user privacy as a defense for not sharing more information.
“These companies claim that they can’t be more transparent without violating user privacy,” he says. “Facebook invokes privacy as a way of preventing [third-party] research from taking place. So the barrier to [evaluating how the algorithm actually works] is [that] Facebook will use privacy as an excuse for why meaningful investigation of the platform can’t take place.”
However, if Congress were to step in and pass new laws addressing these issues, Zuckerman says it could be a catalyst for real change.
“The place where we have the best chance at progress is legislating a certain amount of transparency,” he says. “If we believe as a society that these tools are really powerful and doing damage to us, it makes perfect sense that we would try to audit them so that we can understand what that damage might be.”
A need for congressional intervention
During Instagram CEO Adam Mosseri’s first-ever appearance before Congress on Dec. 8, several members of the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security took the stance that Instagram’s recent efforts to make the platform safer for young users are “too little, too late” and that the time for self-policing without congressional intervention is over.
These remarks seemed to signal that regulatory legislation designed to rein in Facebook, Instagram and other platforms could be on the horizon in the coming year. From Fowler’s perspective, this is the only way that the threats posed by addictive algorithms, as well as other aspects of the Big Tech business model, will begin to be mitigated.
“I’m very doubtful that unless they are compelled by law to do something that they will self-correct,” she says. “We’re not going to be able to do anything without Congress acting. We’ve had so many hearings now and it’s quite obvious the companies aren’t going to police themselves. There aren’t any Big Tech players that can regulate the industry because they’re all in it together and they all work exactly the same way. So we must implement laws.”
The Justice Against Malicious Algorithms Act, a bill introduced by House Democrats in October, would amend Section 230—the portion of the Communications Decency Act that shields companies from legal liability for content users post on their platforms—to hold a company responsible when it “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury.”
However, party divisions, combined with the fact that it’s a congressional election year, give Fowler pause about whether any tangible progress will be made in 2022.
“My suggestion is that [politicians] move forward on what they agree on with regard to this topic—specifically in the area of how social media impacts minors,” she says, “and not focus on why they agree, because they have different reasons for coming to a similar conclusion.”
Whether Big Tech giants will ever be able to reach a point where they’re truly prioritizing people over profits remains to be seen, but Zuckerman notes that companies like Facebook don’t have a great track record in that regard.
“Facebook is a phenomenally profitable company. If they care about protecting users from mis- and disinformation, they have enormous amounts of money to invest in it. Something that’s become very clear from the Facebook Papers is that their systems just aren’t very good and probably have not been very heavily invested in. And that’s the problem that comes up again and again and again with them.”
Instead, Zuckerman suggests that a different way of looking at the societal harms caused by social media may be more apt: “At a certain point we have to start asking ourselves the question: Do we want our digital public sphere, the space in which we’re talking about politics and news and the future, to be run by for-profit companies? And very lightly regulated for-profit companies at that.”
Write to Megan McCluskey at megan.mccluskey@time.com