
Why Frances Haugen Is ‘Super Scared’ About Facebook’s Metaverse


A version of this article was published in TIME’s newsletter Into the Metaverse. Subscribe for a weekly guide to the future of the Internet.

 

My friend and colleague Billy Perrigo has long written about the perils of Big Tech: how it spreads misinformation, fosters extremist networks worldwide and even threatens democracy. Last month, Billy traveled to Paris to interview the Facebook whistleblower Frances Haugen for the cover story of our Dec. 6 issue. He talked to Haugen about a wide range of topics, including Facebook’s version of the metaverse—but that part of the conversation didn’t make it into the issue. So this week, I’m handing the newsletter over to Billy for more on his conversation with Haugen. —Andrew R. Chow

——

In a glitzy October keynote presentation, Facebook CEO Mark Zuckerberg appeared as a cartoon avatar and told us that someday, the way we interact with our friends, work with our colleagues and relax in our spare time will no longer happen just in real life or on 2D social media platforms like Facebook and Instagram. Instead, we will inhabit a 3D virtual world of endless possibilities. The mission of Facebook, now Meta, would be to build that world, Zuckerberg said.

So when I traveled to Paris to interview Frances Haugen in mid-November, the following questions were on my mind: Would Facebook succeed in its quest to shed its toxic brand? Would people be happy to forget about all the revelations of the last five years? Above all: Would people trust Zuckerberg to build a new (virtual) reality that was safe for both its individual users and society at large?

Haugen, of course, is the former Facebook employee who this fall leaked tens of thousands of pages of internal company documents to U.S. authorities and the press. The documents showed that Facebook, which renamed itself in the wake of the revelations, knew far more about the harms of its products, especially Facebook and Instagram, than it ever let on in public.

When I asked her about the metaverse, Haugen focused on the extra surveillance that any meaningful metaverse experience would require. “I am worried that if companies become metaverse companies, individuals won’t get to consent anymore on whether or not to have Facebook’s sensors—their microphones in their homes,” Haugen told me. “This company—which has already shown it lies to us whenever it’s in its own interests—we’re supposed to put cameras and microphones for them in our homes?” she said.

I also asked Haugen what kinds of safety risks don’t exist today but might in a future where we live parts of our lives in the metaverse. “I’m super scared,” she said. Then she launched into a thought experiment:

“So, just imagine this with me. When you go into the metaverse, your avatar is a little more handsome or pretty than yourself. You have better clothes than we have in reality. The apartment is more stylish, more calm. And you take your headset off and you go to brush your teeth at the end of the night. And maybe you just don’t like yourself in the mirror as much. That cycle… I’m super worried that people are going to look at their apartment, which isn’t as nice, and look at their face or their body, which isn’t as nice, and say: ‘I would rather have my headset on.’ And I haven’t heard Facebook articulate any plan on what to do about that.”

Haugen’s answer vividly took me back to one of the documents in her whistleblower disclosures. It concerned the impact of Instagram on the mental health of teenagers, particularly teenage girls. Facebook’s own researchers had run surveys and found, among other shocking statistics, that of teenage girls who said they experienced bad feelings about their bodies, 32% said Instagram made them feel worse. Among teens who said they had suicidal thoughts, 13% of British users and 6% of American users traced those thoughts to Instagram, the documents (first reported by the Wall Street Journal) showed.

In the wake of the Instagram revelations, Facebook said that, overall, research has shown social media use can be positive as well as detrimental to mental health, and that it is experimenting with new ways to make Instagram safer for vulnerable users. On Dec. 7, Instagram said it would begin “nudging teens toward different topics if they’ve been dwelling on one topic for a long time,” and would soon launch new tools for parents and guardians to “get them more involved” in what their teenagers do on the platform.

Frances Haugen on the cover of TIME’s Dec. 6 issue. Photograph by Christopher Anderson—Magnum Photos for TIME

Two months after the disclosures, it is unclear to what extent Facebook is prepared to admit that its platforms are responsible for real-world harms. “The individual humans are the ones who choose to believe or not believe a thing; they are the ones who choose to share or not share a thing,” said Andrew Bosworth, the Meta executive currently responsible for augmented and virtual reality, in an interview with Axios on HBO that aired Sunday.

Even when the company acknowledges the harms it contributes to, its fixes are often retrospective and partial. Should the public trust that it will do things differently with its version of the metaverse? I asked Haugen whether she trusted Zuckerberg when he said, in his keynote, that he would build safety into Facebook’s metaverse from the beginning.

“I think the question is not whether or not I believe him or not,” Haugen said. “I believe any technology with that much influence deserves oversight from the public.”

That message cuts to the heart of a systemic problem that has always dogged technology. Technologists build the future; democratic and civic institutions take time to step in and set limits on what is acceptable in that new world, including protections for vulnerable people, safeguards against misinformation and polarization, and restrictions on monopolistic power. Facebook jettisoned its controversial motto “move fast and break things” in 2014, but the phrase still neatly encapsulates how the company, and the wider industry, works when introducing new tech to the world.

Haugen does, however, have some proposals for the metaverse that could enable regulators to weigh the benefits of technological innovation against its societal harms. “At a minimum, they should have to tell us what the real harms are,” Haugen said, referring to obligations she would like to see regulators place on Facebook. “They should also have to listen to us when we articulate harms. And we should have a process where we can have a conversation in a structured way.”

But regulation is only one part of the puzzle. A separate and more intractable problem is prioritization, which can really only be set at the executive level, in terms of both investment and company culture. For years, Facebook has said that fixing its platform is its number one priority.

But in his keynote, Zuckerberg said that Facebook’s number one priority was now building the metaverse. He didn’t explicitly say that safety had been downgraded, but you only have to look at the numbers to see where the problem lies. Facebook has recently been rebutting journalists’ questions with the statistic that it now spends $5 billion per year on keeping its platforms safe. In October, the company said it would spend at least $10 billion on the metaverse in 2021, a figure it said would only increase in the coming years.

You do the math.




Write to Billy Perrigo at billy.perrigo@time.com