TIME technology

Celebrity Nude Photo Hack Exposes Flaw In How We Think About Privacy and the Cloud


We assume our data stored in the cloud is personal, but in reality it is interpersonal

The celebrity photo hacking scandal has once again put personal information privacy in the spotlight. Apple certainly has egg on its face for failing to implement a basic level of security on its iCloud service. But some commentators, bloggers and Twitter users are pointing fingers at the victims of the hack for taking such photos in the first place. Do we all need to be Internet security experts to use an iPhone?

Extreme measures are not the answer. The real problem is that our current system of data privacy rests on a fundamental flaw: we are each supposed to be solely responsible for our personal information, yet at the same time we are all part of a social network of family, friends and services with whom we are expected to share.

Our systems are set up to make us entirely responsible for safeguarding our data. We toggle innumerable privacy settings that are constantly being updated. We navigate many different online services and platforms, each with their own complications. We are told to back up our data to hard disks and clouds, and exhorted to change our passwords regularly.

At the same time, we are expected to share, share, share. Parents post photos of their kids online so that distant grandparents can follow along. Young travelers start blogs to record their adventures for friends back home. Even nude selfies are rarely taken for the subject’s own enjoyment alone; they are shared with loved ones far away. Indeed, hacked actress Mary Elizabeth Winstead described her leaked photos as ones that “I took with my husband.”

A wide variety of technological services – not just iCloud but also Dropbox and Google Drive – have grown around this requirement for social sharing. Social mores have kept up with the trend. Now, grandparents join Facebook with the expectation that they’ll see photos of their grandchildren. To resist posting is to be a bad parent.

When you fail at this delicate balance of privacy and sharing, friends and strangers alike will line up to tell you that it’s your fault. But how can you keep control over a video of your kids when you post it for your parents? How can you stop a friend at a party from syncing a photo of you to iCloud and uploading it to Instagram, or auto-tagging you using facial recognition on Facebook or Picasa? And when a photo that you text to your friend with an iPhone now belongs both to your pal and to Apple, who is to blame when the photo leaks?

Security experts will claim that the solution to the current hack is better password protection. But even supplemental security questions – which ask for a personal fact about you, like a birthdate or a pet’s name, in addition to a password – fail to account for the fact that most information about us is readily available online. Children’s names, graduation dates and old addresses are only a search term away.

An alternative view is emerging from a study I’m conducting with colleagues at Princeton University and Yahoo Labs. We are learning that our data systems ask us to be individually responsible but fail to account for how and why we share data with each other. They assume our data is personal, when in reality it is interpersonal. We are caught between opting out entirely and managing an impossible number of changing services with finesse. We do all this with our most important relationships at stake.

Let’s call this what it is: data is only “personal” when it is leaked. That’s why blaming the victims doesn’t help. The next time this happens, it won’t be the celebrities who are hacked for their photos: it will be their friends, their parents, their boyfriends and girlfriends. Everyone is vulnerable.

Keeping each other’s data safe is everyone’s responsibility. It’s time that our devices and services live up to it.

Janet Vertesi is assistant professor of sociology at Princeton University, where she is a faculty fellow at the Center for Information Technology Policy. She is a 2014 recipient of a Yahoo Faculty Research Engagement Program grant to study personal data privacy practices.


The Real Reason You Should Be Worried About That Facebook Experiment

Facebook CEO Mark Zuckerberg shows off the new Timeline as he delivers a keynote address during the Facebook f8 conference on September 22, 2011 in San Francisco, California. Justin Sullivan—Getty Images

The private sector and tech companies are increasingly funding what was once independent social science research.

People are up in arms about the recent revelation that Facebook manipulated its users during a psychological study. The study, published in the Proceedings of the National Academy of Sciences and conducted by researchers at Cornell University and Facebook (full disclosure: I know the authors and the article’s editor), showed that people who saw more happy messages in their News Feed were more likely to post happy messages too. The researchers interpreted this as support for the theory of “emotional contagion”: that emotions can spread through online posts and interactions.

Unfortunately for the researchers, the explosive response to their study over social media confirms their findings. Negative emotion can easily go viral.

Why is this study so controversial? Psychologists have known for years that individuals’ emotions can be influenced by their social surroundings. Sociologists have also shown that people act like their friends or other people around them in order to fit in. Just as no one wants to be a Debbie Downer at a party, posting sad stories online when your friends are posting happy ones seems to be a no-no. If anything, the findings add to a long list of Internet studies that argue against “digital dualism” — the notion that we behave differently online than we do offline — by showing that the online world plays an active role in shaping our social lives and experiences.

If the study’s findings are not controversial, its methods certainly are. Yet whether we like it or not, tech companies experiment with their users in precisely this way all the time. User-interface designers and researchers at places like Google, Facebook and Yahoo regularly tweak the live site’s interface for a subset of visitors to see whether users behave differently in response. While this technique shines new light on user behavior, the overall goal is to bring the company more revenue through more users, clicks or glances at ads. Stories of designers who made their companies millions more dollars in advertising revenue just by altering a single pixel on the homepage are legendary in Silicon Valley.
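The mechanics of such live-site experiments are simpler than they sound. A minimal sketch of the first step — deterministically splitting visitors into a control group and a treatment group — might look like the following in Python (the function name, experiment label and variant names are invented for illustration; the companies’ actual systems are proprietary):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment arm.

    Hashing the user id together with the experiment name keeps each
    user's assignment stable across visits without storing any state,
    and gives different splits for different experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same arm of a given experiment.
print(assign_variant("user-42", "happy-feed"))
```

Users in the treatment arm see the tweaked interface; differences in their clicks or posts relative to the control arm are then attributed to the change.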

That’s why any tech company worth its salt has a research department staffed with Ph.D. scientists to analyze its data. That’s also why Facebook is actively hiring and reaching out to social scientists to help it better understand its data and reach new user populations.

Researchers, for their part, are increasingly joining forces with tech companies. There are many reasons to do so. From location check-ins to threaded conversations, from tweets in times of crisis to shared family photos, the reams of data present a fascinating slice of social life in the 21st century. These platforms also provide an unprecedented venue for a natural experiment at scale. With only a few tweaks, and without users’ knowing, researchers can witness which simple changes have tremendous effects.

As a sociologist of technology, I’ve witnessed these changes firsthand. I have grants from Microsoft and Yahoo; Intel funds my colleagues’ students; our graduates staff the labs at Facebook and Google. These collaborations aim to keep Internet research both current and practical.

But there are other reasons social scientists are turning to tech companies. Public money for social-science research is being slashed at the federal level. The congressional committee that oversees the National Science Foundation wants to cut $50 million to $100 million of social, behavioral and economics funding for the next two years (again, full disclosure: I have received NSF funding). A bill called FIRST: Frontiers in Innovation, Research, Science and Technology aims to improve American competitiveness by funding research that supports a U.S. industry advantage. Yet the committee has called specifically for the NSF to stop giving grants to study social media, online behavior or other Internet topics.

Ironically, at precisely the time when American technology companies are looking to social science to help understand their users and improve their business, this research is being denigrated in the House. And at exactly the time when independent research on Internet practices is needed, scholars must turn to companies for both data and funding.

This is a shortsighted move. For one thing, it means we will train fewer social scientists to rigorously and responsibly answer the new questions posed by big data. It also pushes basic research about online social life exclusively into the private sector. That leaves the very companies that make the technologies we use to talk, shop and socialize responsible for managing the ethics of online experimentation. No wonder that once-esoteric questions like informed consent are suddenly headline news.

The recently released study, then, does present reasons to be alarmed, though not for the reasons most of us think. Facebook isn’t manipulating its users any more than usual. But the proposed changes in social-science funding will have a more lasting effect on our lives both online and offline. That should inspire some emotions that are worth sharing.

Janet Vertesi is assistant professor of sociology at Princeton University, where she is a faculty fellow at the Center for Information Technology Policy


My Experiment Opting Out of Big Data Made Me Look Like a Criminal

The Facebook Inc. and Twitter Inc. company logos are seen on an advertising sign during the Apps World Multi-Platform Developer Show in London, U.K., on Wednesday, Oct. 23, 2013. Bloomberg/Getty Images

Here's what happened when I tried to hide my pregnancy from the Internet and marketing companies

This week, the President is expected to release a report on big data, the result of a 90-day study that brought together experts and the public to weigh in on the opportunities and pitfalls of the collection and use of personal information in government, academia and industry. Many people say that the solution to this discomfiting level of personal-data collection is simple: if you don’t like it, just opt out. But as my experience shows, it’s not as simple as that. And it may leave you feeling like a criminal.

It all started with a personal experiment to see if I could keep a secret from the bots, trackers, cookies and other data sniffers online that feed the databases that companies use for targeted advertising. As a sociologist of technology, I was launching a study of how people keep their personal information on the Internet, which led me to wonder: Could I go the entire nine months of my pregnancy without letting these companies know that I was expecting?

This is a difficult thing to do, given how hungry marketing companies are to identify pregnant women. Prospective mothers are busy making big purchases and new choices (which diapers? which bottles?) that will become their patterns for the next several years. In the big-data era of targeted advertising, detection algorithms sniff out potentially pregnant clients based on their shopping and browsing patterns. It’s a lucrative business; according to a report in the Financial Times, identifying a single pregnant woman is worth as much as knowing the age, sex and location of up to 200 people. Some of these systems can even guess which trimester you’re in.
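Reporting on these detection systems suggests they work by weighting telltale purchases and summing the evidence into a score. A toy sketch conveys the idea — every product and weight below is invented for illustration, since retailers’ actual models are proprietary:

```python
# Hypothetical signal products and weights; real retail models are
# proprietary and far larger. Values here are invented for the example.
PREGNANCY_SIGNALS = {
    "prenatal vitamins": 0.9,
    "maternity wear": 0.8,
    "unscented lotion": 0.4,
    "cotton balls": 0.3,
}

def pregnancy_score(purchases):
    """Sum the weights of signal products found in a purchase history."""
    return sum(PREGNANCY_SIGNALS.get(item, 0.0) for item in purchases)

history = ["unscented lotion", "cotton balls", "prenatal vitamins"]
score = pregnancy_score(history)
print(round(score, 1))  # above some threshold, targeted baby ads begin
```

Once a shopper’s score crosses a threshold, the ads follow — which is exactly how the Target story described below unfolded.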

Avoiding this layer of data detectors isn’t a question of checking a box. Last year, many people were shocked by the story of the teenager in Minnesota whose local Target store knew she was expecting before her father did. Based on her in-store purchasing patterns tracked with credit cards and loyalty programs, Target started sending her ads for diapers and baby supplies, effectively outing her to her family. Like the girl in the Target store, I knew that similar systems would infer my status based on my actions. So keeping my secret required new habits, both online and off.

Social media is one of the most pervasive data-collection platforms, so it was obvious that I couldn’t say anything on Facebook or Twitter, or click on baby-related link bait. But social interactions online are not just about what you say but also what others say about you. One tagged photo with a visible bump and the cascade of “Congratulations!” would let the cat out of the bag. So when we phoned our friends and families to tell them the good news, we told them about our experiment, requesting that they not put anything about the pregnancy online.

Social media isn’t the only offender. Many websites and companies, especially baby-related ones, follow you around the Internet. So I downloaded Tor, a privacy-protecting browser that routes your traffic through a worldwide network of relays to mask where it comes from. While it has a reputation for facilitating illicit activities, I used it to visit BabyCenter.com and to look up possible names. And when it came to shopping, I did all my purchasing—from prenatal vitamins to baby gear and maternity wear—in cash. No matter how good the deal, I turned down loyalty-card swipes. I even set up an Amazon.com account tied to an email address hosted on a personal server, with deliveries sent to a locker, paid for with gift cards purchased with cash.

It’s been an inconvenient nine months, but the experiment has exposed harsh realities behind the opt-out myth. For example, seven months in, my uncle sent me a Facebook message congratulating me on my pregnancy. My response was downright rude: I deleted the thread and unfriended him immediately. When I emailed to ask why he did it, he explained, “I didn’t put it on your wall.” Another family member who reached out on Facebook chat a few weeks later exclaimed, “I didn’t know that a private message wasn’t private!”

This sleight of hand is intentional. Internet companies hope that users will not only accept the trade-off between “free” services and private information but will also forget that there is a trade-off in the first place. Once those companies have that personal data, users don’t have any control over where it goes or who might have access to it in the future. And unlike the early days of the Internet, in which digital interactions were ephemeral, today’s Internet services have considerable economic incentives to track and remember—indefinitely.

Attempting to opt out forced me into increasingly awkward interactions with my family and friends. But, as I discovered when I tried to buy a stroller, opting out is not only antisocial; it can appear criminal.

For months I had joked to my family that I was probably on a watch list for my excessive use of Tor and cash withdrawals. But then my husband headed to our local corner store to buy enough gift cards to afford a stroller listed on Amazon. There, a warning sign behind the cashier informed him that the store “reserves the right to limit the daily amount of prepaid card purchases and has an obligation to report excessive transactions to the authorities.”

It was no joke: taken together, the things I had to do to evade marketing detection looked suspiciously like illicit activities. All I was trying to do was fight for the right for a transaction to be just a transaction, not an excuse for a thousand little trackers to follow me around. But avoiding the big-data dragnet meant that I looked not only like a rude family member and an inconsiderate friend but also like a bad citizen.

The myth that users will “vote with their feet” is simply wrong if opting out comes at such a high price. With social, financial and even potentially legal repercussions involved, the barriers to exit are high. This leaves users and consumers with neither a real choice nor a voice to express their concerns.

No one should have to act like a criminal just to have some privacy from marketers and tech giants. But the data-driven path we are currently on—paved with the heartwarming rhetoric of openness, sharing and connectivity—actually undermines civic values and circumvents checks and balances. The President’s report can’t come soon enough. When it comes to our personal data, we need better choices than either “leave if you don’t like it” or no choice at all. It’s time for a frank public discussion about how to make personal-information privacy not just a series of check boxes but a basic human right, both online and off.
