
Facebook’s Controversial Experiment: Big Tech Is the New Big Pharma

The Facebook logo is reflected in the eyeglasses of a user in San Francisco on Dec. 7, 2011. Bloomberg/Getty Images

Research in digital media is turning data science into human subjects research.

It seems nearly everyone is angry about Facebook’s already infamous emotional manipulation study—most recently, European regulators. A number of solid objections to both the study itself and to what it reveals about Facebook have been raised, but Facebook and a few others initially defended the study by saying such experiments in user-experience design are something they do all the time, just ordinary industry R&D.

I never thought I’d say this, but Facebook is technically correct. They actually are shaping and manipulating your experiences on the site all the time. Facebook’s very design encourages sharing positive emotions more than negative ones, and its mysterious algorithms pick and choose who gets to see what content. But it’s not just Facebook. Any and every platform on the Web that uses algorithms is also manipulating what you see on its site (a more generous term might be “automagically curating”), and most platforms aren’t transparent about what they’re doing or how they’re doing it. The truth is that you’ve been a lab rat for at least as long as you’ve used online media. You just didn’t notice before.

But does the mere fact that such practices are commonplace make them right? And to what standards should we hold people who do research on social media? Should research subjects really have fewer rights simply because a corporation has a profit motive?

In truth there’s nothing particularly remarkable about the emotional contagion study, other than the fact that a couple of academics were involved and that someone decided to call the study “science” by publishing it in the journal PNAS (Proceedings of the National Academy of Sciences). Yet we haven’t always given corporate research free rein, especially when it comes to covert manipulation.

By academic research standards, however, the emotional contagion study falls short. Facebook failed to get informed consent from participants in the study, and some have argued that it risked pushing emotionally vulnerable users over the edge by making them sadder and then failing to debrief them after the fact. Academic research on human subjects usually has to be approved by the host institution’s institutional review board (IRB), a group of people from various disciplines who evaluate research proposals before studies take place. This is to ensure the safety of the people who participate in the studies, and most prestigious scientific journals will only publish papers about studies that were IRB-approved.

It looks now as though the emotional contagion study may not have been evaluated by an IRB. PNAS has said it decided to trust whatever IRB had initially approved the study, but Cornell’s IRB says it ruled the study exempt from its oversight because, while the two Cornell researchers helped to design the experiment and to analyze the results, they were not directly involved in collecting any data. This means all ethical oversight for the experiment goes back to Facebook, which is rather like leaving the proverbial fox to guard the henhouse.

And yet, IRBs aren’t a magic bullet. They can be more interested in avoiding lawsuits than in protecting research subjects. Their evaluation criteria are tailored to medical and biological research, and are therefore poorly suited to research in the social sciences. Even the most ethical of researchers will admit that IRB review can be tedious and time-consuming. Only (some) research that involves human subjects needs IRB approval, and one could argue that digital media platforms’ experiments are data science rather than human subjects research.

The Facebook study therefore spans a few different categories: is it human subjects research or is it data science? Is it scientific/academic research (and therefore potentially subject to IRB oversight) or is it corporate/marketing research (and therefore anything goes)? These distinctions, however, are part of the problem—and perhaps a bigger problem than the study itself.

Corporate research and academic research have been blurring together for some time. Before its regulatory tangle with the FDA, for instance, the direct-to-consumer genetic-testing-cum-social-networking company 23andMe was able to publish findings in a Public Library of Science (PLoS) journal (though gaining acceptance for publication was an involved process, largely because PLoS was more careful about 23andMe’s IRB review than PNAS was about Facebook’s). 23andMe is now seeking to reenter medical research by collaborating with academic researchers; Twitter is offering “data grants” to research institutions; Facebook is seeking sociologists, though it wants the “digital demography” sort rather than the “critiques of power” sort; Snapchat actually hired a critique-and-theory sociologist. Pharmaceutical companies hire university hospitals to conduct clinical trials, and—as government and foundation funding for research dwindles, especially in the social sciences—universities and their departments increasingly depend on corporate funding to stay afloat.

Nor is the line between “person” and “data” so clear anymore. Digital technologies have become so integrated into how we experience the world that my colleague PJ Rey and I argue the digital social technologies we use count as part of ourselves. This means that, even if the Facebook study is data science, it is also human subjects research—even if Facebook never experimented directly on people’s physical bodies.

We need to create new basic standards for social and behavioral research, and these standards must apply equally to corporations and institutions, to market researchers and academic researchers, to data scientists and social scientists alike. At minimum, these standards should include informed consent the way I learned it in my graduate training as a sociologist, rather than as an aside buried in an undecipherable Terms of Service agreement that few people even attempt to read. (Note the participation-rate success of 23andMe’s research arm, which used to be called “23andWe.” Whatever else one might say about the venture, 23andWe demonstrated not only that people like to be asked for consent, but also that, given an opportunity to contribute to “science,” quite a number of folks will opt in and volunteer to share their data.)

We also need to design a new review process that can more readily accommodate both social scientific research methods and the realities of life in the increasingly digitized 21st century, and this process must be both transparent to the public and not unduly cumbersome to researchers. Facebook has since apologized (sort of) for the emotional contagion study and has said it will change how it handles research in the future. But if the end result of the blowup over the study is that corporations’ social and behavioral research retreats further into secrecy and away from independent oversight of any kind, then everyone loses.

Whitney Erin Boesel is a Fellow at the Berkman Center for Internet & Society at Harvard University, a Visiting Scholar at the MIT Center for Civic Media, and a PhD student in Sociology at the University of California, Santa Cruz. She’s active on Twitter as @weboesel.


Google Glass Doesn’t Have a Privacy Problem. You Do.

2013 Google Developer Conference, May 17, 2013, in San Francisco, California. Justin Sullivan—Getty Images

Blaming a particular device is a lot easier than confronting, for example, America's cultural fault lines and problems of power and privilege.

Brace yourself for more Glassholes. Last week, Google began making a limited supply of Google Glass available to anyone with $1,500 to spare. To be sure, Glass still affords—or makes possible—a whole range of problematic behaviors. There are concerns about distracted driving; about people (probably men) taking photographs of other people (probably women) without consent; about Glassholes serving as foot soldiers in Google’s data-gobbling army, expanding the corporation’s ongoing assault on what we used to call privacy.

These are real issues but not new ones; rather, they are the newest manifestations of much larger long-standing problems. While Glass may make those problems more visible than they were before, hating Glass (or even Glassholes) won’t make the problems go away.

The backlash against Glass is a result of what I call foregrounding—when a new technology makes some pre-existing aspect of society more visible and, in so doing, is mistaken for having caused the phenomenon in question rather than having brought it to increased attention. Social scientists have long been aware that this happens and have described it with many different names. I like to use foregrounding, however, because it emphasizes that while what we’re seeing may be more salient than it was before, the phenomenon has been there all along.

Contentious or unsettling new technologies like Glass are therefore important signals and can draw our attention toward issues that we as a society need to address. Consider “creepshots”—suggestive photographs taken without the knowledge or consent of the people pictured (who are almost always women). It may well be easier to take a creepshot with Glass than with a smartphone, but Glass doesn’t cause creepshots. Creepshots were a problem before the advent of face-mounted computers, and they will continue to be a problem until we deal with the much bigger problem of why anyone would take one in the first place.

That “why” is an incredibly complicated tangle of issues, and blaming one particular device is a lot easier than confronting, for example, American culture’s deeply entrenched sexism and misogyny. But banning Glass will not stop creepshots, because creeps will take creepshots even if they can’t be Glassholes. Even smashing every camera on earth won’t solve problems that, at their core, are problems of power and privilege.

Similarly, banning Glass will not change the fact that Google—and Facebook, the National Security Agency and a whole range of other actors—are collecting vast troves of information about nearly all of us, nearly all of the time. Moreover, Glass is far from the only product or service to make information about people other than its users available to a corporation. That your privacy is affected by the products other people use may be more “in your face” with Glass, but again: this is already happening, and it will continue to happen until we address the much larger issues that surround data production, access, ownership and use.

As I’ve written elsewhere, we are caught between two conflicting privacy paradigms—but neither paradigm takes into account the fact that our privacy, or lack thereof, is not a product of our individual choices. Until we reconceptualize privacy on both the social and policy levels, the “privacy problem” will persist.

It’s doubtful, of course, that Glass will catch on in any meaningful way; as ever and always, “Google doesn’t get social.” But Glass is not the first wearable computer or camera out there, nor will it be the last. Samsung is rumored to be working on something like Glass; Apple insiders predict a smartwatch; the Narrative Clip, a wearable “lifelogging” camera, is already in production; and MIT has been working on Wearable Computing for close to 20 years.

We shouldn’t accept these devices as inevitable, but neither should we assume that by eliminating the devices, we can eliminate the cultural fault lines that the devices foreground. Instead, we need to ask hard questions about what kind of a society we want to live in—and then demand that new technologies move us closer to that goal, not further away from it.

Whitney Erin Boesel is a fellow at the Berkman Center for Internet & Society at Harvard University, a visiting scholar at the MIT Center for Civic Media and a Ph.D. student in sociology at the University of California at Santa Cruz. She’s active on Twitter as @weboesel.
