TIME technology

What Sci-Fi Got Wrong About the Apple Watch: Vanity


Janet Vertesi is Assistant Professor of Sociology at Princeton University.

What we buy tells others who we are

James Bond had one. So did Maxwell Smart and Captain Kirk. Science fiction is littered with examples of heroes and villains barking orders into their wrists or pressing the right combination of tiny buttons to save the day. These science-fiction smartwatches may have mastered communication in space and the physics of teleportation, and opened the door to the bad guy’s lair. But the Apple Watch, released today, reminds us that there is one thing they all got wrong: the role of conspicuous consumption.

Conspicuous consumption is the idea that our purchasing decisions are not about necessity, but instead are part of how we manage our identity. We all like to buy products with messages that say something about our preferences and ideals. But sociologists will also tell you that managing other people’s impressions of us can be equally important. So while shopping at Whole Foods makes you feel like you are eating healthily, supporting the environment, and participating in an upper-middle class lifestyle, carrying a Whole Foods bag down the street makes others think those things about you. That can make all the difference.

I was thinking about this when I visited an Apple store last week. The Apple Watches were on display on a well-lit table, like a jewelry case. Salespeople were encouraging visitors to try them on and place their pre-orders. I played with a demo watch, too. In terms of functionality, it doesn’t do much that my Pebble smartwatch doesn’t already do. Why go to Whole Foods, I wondered, when the corner grocery down the street has equally good produce at half the price?

Then I overheard a young family at a display case nearby. A boy of about 11 was pointing to the different models. “That’s the $12,000 one,” he declared, while his family nodded. They seemed less impressed with the fact that a watch cost $12,000 than they were with the fact that their son recognized it on sight.

That is how conspicuous consumption works. As the Apple Watches ship across the United States, you, too, will start to notice not just who has one, but which one they have. Is it the “sport” version or the “gold” version? They all work the same way, but they have different implications for how we think about each other.

Skeptics will say that moving into this marketplace required a luxury price tag. After all, watches are already the classic conspicuous-consumption product. No wonder Apple hired Angela Ahrendts, the chief executive officer who transformed Burberry into a global luxury brand, as a senior vice president. Last year, it also poached a vice president from Tag Heuer, a Swiss company that knows a thing or two about high-class watches.

We also know that iPhone ownership is closely associated with socioeconomic position in the United States. Apple has long been accused of using high price points to sell computers, phones, and tablets that look and feel luxurious, even when other models are technically superior. Apple says that charging more for hardware means that it doesn’t need to sell user data for profit. This is a laudable practice as long as it lasts, but it still means that in order to opt out, you need to be able to afford an Apple machine.

When people have been buying thousand-dollar Rolexes for years, why does it matter which watch we choose? According to anthropologist Laura Watts, the rhetoric associated with smartwatches, unlike that of most watches, is extremely powerful. As more and more technology companies move into the wearables market, we are told that these devices will revolutionize the world, open up new horizons to the underclasses, transform health care, and infuse our lagging economies with new vitality.

In reality, consumer-driven device adoption can only feed a cycle of inequality. If the future is here, who can afford it? If only some of us can, then whose future is it?

What our science fiction got wrong is that the new devices powering the future would not sink into the background, hardworking and unremarkable. They would be consumer technologies. Their very ownership would say something about the person or company that bought them.

It could say, “I’m an Apple person” or “I’m an Android person.” It could say, “I’m sporty” or “I believe in open-source systems.” It could also say, “I can afford to live in the future; how about you?”

TIME Ideas hosts the world's leading voices, providing commentary and expertise on the most compelling events in news, society, and culture. We welcome outside contributions. To submit a piece, email ideas@time.com.

TIME Environment

Stop Counting on Individuals To Solve California’s Water Crisis

A sprinkler waters a lawn on April 7, 2015 in Walnut Creek, California. Justin Sullivan—Getty Images

Janet Vertesi is Assistant Professor of Sociology at Princeton University.

"Even if 50,000 people shorten their showers, this is a drop in the proverbial bucket"

The drought in California has revealed more than just dry lake beds. As NASA issues dire warnings about the prospects for water in the state, Californians are cutting down on their showers, draining their swimming pools, and berating their neighbors for their neatly manicured lawns. Most distressing about the current crisis, though, is what it has revealed about the assumptions that underlie our environmental policies and technologies. Many believe that saving water starts at home, but in this case it’s not residential consumers but large agricultural customers who use most of the water provided by state utilities.

We like to think of individual consumers as rational economic actors, whose choices to consume (or not) affect the pricing and availability of a product through market forces like supply and demand. So deeply ingrained in us is this notion that it appears to be common sense for Governor Jerry Brown to impose restrictions on residential water customers. We are therefore currently witnessing an unprecedented interest in water and the idea that individuals matter. Remember when we used to count our carbon footprint? Now Californians count gallons of water. They’re actively discussing the benefits and drawbacks of “water-hungry” legumes, meats and vegetables online. It is already a faux pas to serve almonds at parties.

Our environmental technologies also reflect this simplistic, individualist ideal of conservation. The Nest thermostat not only manages your home heating and cooling systems, it also tells you how much energy you’ve saved and encourages you to save more. Airlines offer online carbon counters so you can offset your upcoming flight, while new cars such as the Prius or the Leaf let you drive while contributing less CO2 to the atmosphere. It probably won’t be long before we see apps that show you exactly how much water you are consuming.

The idea that individual consumption choices can make a difference is nice in theory. But individual efforts to “go green” pale in comparison to the effects of large-scale systems on our air quality, our water availability, or our climate. The enormous agricultural systems in California require so much water because they are busy feeding the rest of the United States and exporting food to other parts of the world. Even if 50,000 people shorten their showers, this is a drop in the proverbial bucket.

As with any infrastructural change, opting out is expensive for individual citizens. It can take years for the reduced cost of a tank of gas to offset the added cost of a hybrid vehicle. Further, because companies negotiate discounted rates for their access and use, their consumption choices are sheltered from mainstream market pricing. Even when supply is limited, demand can remain high because of this price-fixing. So a market-based approach aimed only at individual consumers is doomed to failure.

To be sure, existing theories of “efficiency generation” suggest that individual consumers do matter. If thousands of customers install energy-efficient light bulbs, environmental economists equate the resulting power savings with the investment of bringing new generating facilities online. Although utility rates are set differently for residential and industrial consumers, individual habits can and do add up.

But only if we start thinking on a larger scale. That means thinking about how existing systems influence or curtail our actions and possibilities, rethinking how our current large-scale systems intersect with people, companies, and regulatory agencies, and considering new technologies, policies, and pricing structures that can produce system-wide change.

It also means opening new technological opportunities. What if my thermostat didn’t just tell me what I am saving but also what my neighbors are saving, helping me to support and reinforce their choices? What if we expanded the “carbon offset” idea to water-needy companies to help them consume what they needed while giving back elsewhere? What if several different companies that wish to manage certain resources could band together, across distances, accessing and visualizing the data they need to evaluate their changes?

Fortunately, many of our best and brightest are already on location in California. Silicon Valley has long professed its desire to “disrupt” the system and “change the world.” Here is their chance.


TIME technology

Celebrity Nude Photo Hack Exposes Flaw In How We Think About Privacy and the Cloud

Daniel Sambraus—Getty Images

We assume our data stored in the cloud is personal, but in reality it is interpersonal

The celebrity photo hacking scandal has once again put personal-information privacy in the spotlight. Apple certainly has egg on its face for failing to implement a basic level of security on its iCloud service. But some commentators, bloggers, and Twitter users are pointing fingers at the victims of the hack for keeping such photos in the first place. Do we all need to be Internet security experts to use an iPhone?

Extreme measures are not the answer. The real problem is that our current system of data privacy is based on a fundamental flaw: we are each supposed to be solely responsible for our personal information, yet at the same time we are all part of a social network of family, friends, and services with whom we are expected to share.

Our systems are set up to make us entirely responsible for safeguarding our data. We toggle innumerable privacy settings that are constantly being updated. We navigate many different online services and platforms, each with their own complications. We are told to back up our data to hard disks and clouds, and exhorted to change our passwords regularly.

At the same time, we are expected to share, share, share. Parents post photos of their kids online so that distant grandparents can follow along. Young travelers start blogs to record their adventures for friends back home. Even nude selfies are rarely taken only for the subject’s own enjoyment; they are shared with loved ones far away. Indeed, hacked actress Mary Elizabeth Winstead described her leaked photos as ones that “I took with my husband.”

A wide variety of technological services – not just iCloud but also Dropbox and Google Drive – have grown around this requirement for social sharing. Social mores have kept up with the trend. Now, grandparents join Facebook with the expectation that they’ll see photos of their grandchildren. To resist posting is to be a bad parent.

When you fail at this delicate balance of privacy and sharing, friends and strangers alike will line up to tell you that it’s your fault. But how can you keep control over a video of your kids when you post it for your parents? How can you stop a friend at a party from syncing a photo of you to iCloud and uploading it to Instagram, or auto-tagging you using facial recognition on Facebook or Picasa? And when a photo that you text to your friend with an iPhone now belongs both to your pal and to Apple, who is to blame when the photo leaks?

Security experts will claim that the solution to the current hack is better password protection. But even two-stage security – systems that ask for a personal fact about you like a birthdate or pet name in addition to a password – fails to account for the fact that most information about us is readily available online. Children’s names, graduation dates and old addresses are only a search term away.

An alternative view is emerging from a study I’m conducting with colleagues at Princeton University and Yahoo Labs. We are learning that our data systems ask us to be individually responsible but fail to account for how and why we share data with each other. They assume our data is personal, when in reality it is interpersonal. We are caught between opting out entirely and managing an impossible number of changing services with finesse. We do all this with our most important relationships at stake.

Let’s call this what it is: data is only “personal” when it is leaked. That’s why blaming the victims doesn’t help. The next time this happens, it won’t be the celebrities who are hacked for their photos: it will be their friends, their parents, their boyfriends and girlfriends. Everyone is vulnerable.

Keeping each other’s data safe is everyone’s responsibility. It’s time that our devices and services live up to it.

Janet Vertesi is assistant professor of sociology at Princeton University, where she is a faculty fellow at the Center for Information Technology Policy. She is a 2014 recipient of a Yahoo Faculty Research Engagement Program grant to study personal data privacy practices.

TIME technology

The Real Reason You Should Be Worried About That Facebook Experiment

Facebook Holds Its Fourth f8 Developer Conference
Justin Sullivan—Getty Images Facebook CEO Mark Zuckerberg shows off the new Timeline as he delivers a keynote address during the Facebook f8 conference on September 22, 2011 in San Francisco, California.

The private sector and tech companies are increasingly funding what was once independent social science research.

People are up in arms about the recent revelation that Facebook manipulated its users during a psychological study. The study, published in the Proceedings of the National Academy of Sciences and conducted by researchers at Cornell University and Facebook (full disclosure: I know the authors and the article’s editor), showed that people who saw more happy messages in their News Feed were more likely to post happy messages too. The researchers interpreted this as support for the theory of “emotional contagion”: that emotions can spread through online posts and interactions.

Unfortunately for the researchers, the explosive response to their study over social media confirms their findings. Negative emotion can easily go viral.

Why is this study so controversial? Psychologists have known for years that individuals’ emotions can be influenced by their social surroundings. Sociologists have also shown that people act like their friends or other people around them in order to fit in. Just as no one wants to be a Debbie Downer at a party, posting sad stories online when your friends are posting happy ones seems to be a no-no. If anything, the findings add to a long list of Internet studies that argue against “digital dualism” — the notion that we behave differently online than we do offline — by showing that the online world plays an active role in shaping our social lives and experiences.

If the study’s findings are not controversial, its methods certainly are. Yet whether we like it or not, tech companies experiment with their users in precisely this way all the time. User-interface designers and researchers at places like Google, Facebook and Yahoo regularly tweak the live site’s interface for a subset of visitors to see whether users behave differently in response. While this technique shines new light on user behavior, the overall goal is to bring the company more revenue through more users, clicks or glances at ads. Stories of designers who made their companies millions more dollars in advertising revenue just by altering a single pixel on the homepage are legendary in Silicon Valley.

That’s why any tech company worth its salt has a research department staffed with Ph.D. scientists to analyze their data. That’s also why Facebook is actively hiring and reaching out to social scientists to help it better understand its data and reach new user populations.

Researchers, for their part, are increasingly joining forces with tech companies. There are many reasons to do so. From location check-ins to threaded conversations, from tweets in times of crisis to shared family photos, the reams of data present a fascinating slice of social life in the 21st century. These platforms also provide an unprecedented venue for a natural experiment at scale. With only a few tweaks, and without users’ knowing, researchers can witness which simple changes have tremendous effects.

As a sociologist of technology, I’ve witnessed these changes firsthand. I have grants from Microsoft and Yahoo; Intel funds my colleagues’ students; our graduates staff the labs at Facebook and Google. These collaborations aim to keep Internet research both current and practical.

But there are other reasons social scientists are turning to tech companies. Public money for social-science research is being slashed at the federal level. The congressional committee that oversees the National Science Foundation wants to cut $50 million to $100 million of social, behavioral and economics funding for the next two years (again, full disclosure: I have received NSF funding). A bill called FIRST (Frontiers in Innovation, Research, Science and Technology) aims to improve American competitiveness by funding research that supports a U.S. industry advantage. Yet the committee has called specifically for the NSF to stop giving grants to study social media, online behavior or other Internet topics.

Ironically, at precisely the time when American technology companies are looking to social science to help understand their users and improve their business, this research is being denigrated in the House. And at exactly the time when independent research on Internet practices is needed, scholars must turn to companies for both data and funding.

This is a shortsighted move. On the one hand, it means we will train fewer social scientists to rigorously and responsibly answer the new questions posed by big data. But it also pushes basic research about online social life exclusively into the private sector. This leaves the same companies that make the technologies we use to talk, shop and socialize responsible for managing the ethics of online experimentation. No wonder that esoteric questions like informed consent are suddenly headline news.

The recently released study, then, does present reasons to be alarmed, though not for the reasons most of us think. Facebook isn’t manipulating its users any more than usual. But the proposed changes in social-science funding will have a more lasting effect on our lives both online and offline. That should inspire some emotions that are worth sharing.

Janet Vertesi is assistant professor of sociology at Princeton University, where she is a faculty fellow at the Center for Information Technology Policy

TIME technology

My Experiment Opting Out of Big Data Made Me Look Like a Criminal

The Latest Mobile Apps At The App World Multi-Platform Developer Show
Bloomberg/Getty Images The Facebook Inc. and Twitter Inc. company logos are seen on an advertising sign during the Apps World Multi-Platform Developer Show in London, U.K., on Wednesday, Oct. 23, 2013.

Here's what happened when I tried to hide my pregnancy from the Internet and marketing companies

This week, the President is expected to release a report on big data, the result of a 90-day study that brought together experts and the public to weigh in on the opportunities and pitfalls of the collection and use of personal information in government, academia and industry. Many people say that the solution to this discomfiting level of personal-data collection is simple: if you don’t like it, just opt out. But as my experience shows, it’s not as simple as that. And it may leave you feeling like a criminal.

It all started with a personal experiment to see if I could keep a secret from the bots, trackers, cookies and other data sniffers online that feed the databases that companies use for targeted advertising. As a sociologist of technology, I was launching a study of how people keep their personal information on the Internet, which led me to wonder: Could I go the entire nine months of my pregnancy without letting these companies know that I was expecting?

This is a difficult thing to do, given how hungry marketing companies are to identify pregnant women. Prospective mothers are busy making big purchases and new choices (which diapers? which bottles?) that will become their patterns for the next several years. In the big-data era of targeted advertising, detection algorithms sniff out potentially pregnant clients based on their shopping and browsing patterns. It’s a lucrative business; according to a report in the Financial Times, identifying a single pregnant woman is worth as much as knowing the age, sex and location of up to 200 people. Some of these systems can even guess which trimester you’re in.

Avoiding this layer of data detectors isn’t a question of checking a box. Last year, many people were shocked by the story of the teenager in Minnesota whose local Target store knew she was expecting before her father did. Based on her in-store purchasing patterns tracked with credit cards and loyalty programs, Target started sending her ads for diapers and baby supplies, effectively outing her to her family. Like the girl in the Target store, I knew that similar systems would infer my status based on my actions. So keeping my secret required new habits, both online and off.

Social media is one of the most pervasive data-collection platforms, so it was obvious that I couldn’t say anything on Facebook or Twitter, or click on baby-related link bait. But social interactions online are not just about what you say but also what others say about you. One tagged photo with a visible bump and the cascade of “Congratulations!” would let the cat out of the bag. So when we phoned our friends and families to tell them the good news, we told them about our experiment, requesting that they not put anything about the pregnancy online.

Social media isn’t the only offender. Many websites and companies, especially baby-related ones, follow you around the Internet. So I downloaded Tor, a browser that anonymizes your traffic by routing it through a network of relays around the world. While it has a reputation for facilitating illicit activities, I used it to visit BabyCenter.com and to look up possible names. And when it came to shopping, I did all my purchasing—from prenatal vitamins to baby gear and maternity wear—in cash. No matter how good the deal, I turned down loyalty-card swipes. I even set up an Amazon.com account tied to an email address hosted on a personal server, delivering to a locker, and paid with gift cards purchased with cash.

It’s been an inconvenient nine months, but the experiment has exposed harsh realities behind the opt-out myth. For example, seven months in, my uncle sent me a Facebook message congratulating me on my pregnancy. My response was downright rude: I deleted the thread and unfriended him immediately. When I emailed to ask why he did it, he explained, “I didn’t put it on your wall.” Another family member who reached out on Facebook chat a few weeks later exclaimed, “I didn’t know that a private message wasn’t private!”

This sleight of hand is intentional. Internet companies hope that users will not only accept the trade-off between “free” services and private information but will also forget that there is a trade-off in the first place. Once those companies have that personal data, users don’t have any control over where it goes or who might have access to it in the future. And unlike the early days of the Internet, in which digital interactions were ephemeral, today’s Internet services have considerable economic incentives to track and remember—indefinitely.

Attempting to opt out forced me into increasingly awkward interactions with my family and friends. But, as I discovered when I tried to buy a stroller, opting out is not only antisocial, but it can appear criminal.

For months I had joked to my family that I was probably on a watch list for my excessive use of Tor and cash withdrawals. But then my husband headed to our local corner store to buy enough gift cards to afford a stroller listed on Amazon. There, a warning sign behind the cashier informed him that the store “reserves the right to limit the daily amount of prepaid card purchases and has an obligation to report excessive transactions to the authorities.”

It was no joke that taken together, the things I had to do to evade marketing detection looked suspiciously like illicit activities. All I was trying to do was to fight for the right for a transaction to be just a transaction, not an excuse for a thousand little trackers to follow me around. But avoiding the big-data dragnet meant that I not only looked like a rude family member or an inconsiderate friend, but I also looked like a bad citizen.

The myth that users will “vote with their feet” is simply wrong if opting out comes at such a high price. With social, financial, and even potentially legal repercussions involved, the barriers to exit are high. This leaves users and consumers with neither a real choice nor a voice to express our concerns.

No one should have to act like a criminal just to have some privacy from marketers and tech giants. But the data-driven path we are currently on—paved with the heartwarming rhetoric of openness, sharing and connectivity—actually undermines civic values and circumvents checks and balances. The President’s report can’t come soon enough. When it comes to our personal data, we need better choices than either “leave if you don’t like it” or no choice at all. It’s time for a frank public discussion about how to make personal-information privacy not just a series of check boxes but a basic human right, both online and off.
