TIME Security

The Problem With Police Body Cameras

Brian Gurule, a Colorado Springs motor officer, poses with a Digital Ally First Vu HD body-worn camera on his chest in Colorado Springs on April 21, 2015.
Rick Wilking—Reuters

Janet Vertesi is Assistant Professor of Sociology at Princeton University.

In court, video evidence never speaks for itself

Correction appended, May 4, 2015.

The recent events in Baltimore have reopened the debate over mandatory police body cameras as a protective measure against civic violence. Civil rights organizations such as the ACLU argue that visual evidence will make it easier to prosecute cases of police brutality, while some police departments have welcomed the opportunity to review visual data for accountability programs and internal investigations. Even Hillary Clinton and President Barack Obama are on board. The Department of Justice announced Friday that it’s giving $20 million to police departments to buy body cameras as part of a three-year, $75 million program.

But as history tells us, camera evidence does not an indictment make. In my 15 years of studying how experts work with images, it has become clear that the evidence never “speaks for itself.” Like words, images are open to interpretation.

In 1991, a witness recorded the beating of Rodney King by police officers in Los Angeles and the video was presented in a high-profile court case that gripped the nation. King was African American; the officers were white. Despite video evidence of what appeared to be brutality, the police officers were acquitted. Angry citizens took to the streets, prompting intervention from the National Guard.

UCLA professor Charles Goodwin describes how lawyers on both sides of King’s case interpreted the video for the jury in very different ways. When King jerked on the ground in apparent response to the beating, the LAPD’s expert witnesses described his movements as continued aggression and resisting arrest. It was no longer a slam-dunk case.

This confusion is not limited to video evidence. According to Jennifer Tucker, professor of history at Wesleyan University, photographs were also not believed when they were first presented as evidence in legal trials or scientific experiments. While some argued that cameras were impassive observers and therefore trustworthy recorders of objective evidence, everyone also knew that photographs could lie. Long before dating profile selfies, wispy visual effects and tricks of the light could place ghosts into family portraits or monsters in the nearby lake.

Because of this, people use sketches, image processing, diagrams and expert narration to get others to see what they see. In my book about the Mars Rover mission, I describe how scientists use these techniques to show their colleagues which minerals to look out for. I’ve also seen amateurs use the same techniques to show a face, a sasquatch, or the Virgin Mary on Mars.

This kind of visual suggestibility is powerful. Just like the first time you saw the rabbit in the duck-rabbit or the old woman instead of the young woman in those old gestalt images, once a new interpretation has been seen it cannot be unseen. This is problematic when images are supposed to provide evidence that speaks for itself. Further, when there are power differences between the groups who offer these competing interpretations, there are real-world implications for justice.

We have already seen such competing interpretations of video evidence of recent deaths in the news. In the video of the apparent chokehold of Eric Garner, experts described what was witnessed not as excessive force that broke the law but as a justified, trained response to Garner’s resisting arrest (others described this perceived “resistance” as a reflex while being choked). Even a recent video from Baltimore of a woman slapping her son for participating in the riots has gone viral, with some upholding it as evidence of fear of police retaliation and others applauding her for disciplining her child.

All this points to a problem with body cams as a technological “quick fix.” It is not just that video evidence can be interpreted differently. It is that, like other evidential technologies before them, from photographs to fingerprinting and even DNA, body cams will enter into a social system involving courts, police departments and civil rights organizations that already are at loggerheads about the interpretation of police actions. As these communities endeavor to make a video “speak for itself,” they will inevitably speak for it, imposing competing interpretations and introducing uncertainty instead of proof.

Body cams are not a panacea. To be truly effective in courts of law, they will require thoughtful legal parameters concerning the admission, interpretation and power of video evidence, in addition to the system-level changes and privacy protections that experts suggest. Otherwise, like other technologies before them, their “evidence” will get swept up into the continuing battles of one side versus another. And whose interpretation wins out will always say more about who is in power than who is in the right.

Correction: The original version of this story misstated Jennifer Tucker’s affiliation. She is a professor of history at Wesleyan University.

TIME Ideas hosts the world's leading voices, providing commentary and expertise on the most compelling events in news, society, and culture. We welcome outside contributions. To submit a piece, email ideas@time.com.

TIME technology

What Sci-Fi Got Wrong About the Apple Watch: Vanity

Apple

Janet Vertesi is Assistant Professor of Sociology at Princeton University.

What we buy tells others who we are

James Bond had one. So did Maxwell Smart and Captain Kirk. Science fiction is littered with examples of heroes and villains barking orders into their wrists or pressing the right combination of tiny buttons to save the day. These science-fiction smart watches may have mastered communication in space and teleportation physics, and opened the door to the bad guy’s lair. But the Apple Watch, released today, reminds us that there is one thing they all got wrong: the role of conspicuous consumption.

Conspicuous consumption is the idea that our purchasing decisions are not about necessity but instead are part of how we manage our identity. We all like to buy products with messages that say something about our preferences and ideals. But sociologists will also tell you that managing other people’s impressions of us can be equally important. So while shopping at Whole Foods makes you feel as if you are eating healthily, supporting the environment and participating in an upper-middle-class lifestyle, carrying a Whole Foods bag down the street makes others think those things about you. That can make all the difference.

I was thinking about this when I visited an Apple store last week. The Apple Watches were on display in a well-lit table like a jewelry case. Salespeople were encouraging visitors to try them on and place preorders. I played with a demo watch too. In terms of functionality, it doesn’t do much that my Pebble smart watch doesn’t already do. Why go to Whole Foods, I wondered, when the corner grocery down the street has equally good produce at half the price?

Then I overheard a young family at a display case nearby. A boy of about 11 was pointing to the different models. “That’s the $12,000 one,” he declared, while his family nodded. They seemed less impressed with the fact that a watch cost $12,000 than they were with the fact that their son recognized it on sight.

That is how conspicuous consumption works. As the Apple Watches ship across the U.S., you too will start to notice not just who has one but which one they have. Is it the “sport” version or the “gold” version? They all work the same way, but they have different implications for how we think about one another.

Skeptics will say that moving into this marketplace required a luxury price tag. After all, watches are already the classic conspicuous-consumption product. No wonder Apple hired Angela Ahrendts, the chief executive officer who transformed Burberry into a global luxury brand, as senior vice president. Last year it poached a vice president from Tag Heuer, a Swiss company that knows a thing or two about high-class watches.

We also know that iPhone ownership is closely associated with socioeconomic status in the U.S. Apple has been accused of using high price points to sell computers, phones and tablets that look and feel luxurious, despite the technical superiority of other models. Apple says charging more for hardware means it doesn’t need to sell user data for profit. This is a laudable practice as long as it lasts, but it still means that in order to opt out, you need to be able to afford an Apple machine.

Since people have been buying $1,000 Rolexes for years, why does it matter which watch we choose? According to anthropologist Laura Watts, the rhetoric associated with smart watches, unlike most other watches, is extremely powerful. As more and more technology companies move into the wearable market, we are told that they will revolutionize the world, open up new horizons to the underclasses, transform health care and infuse our lagging economies with new vitality.

In reality, adoption of consumer-driven devices can only feed a cycle of inequality. If the future is here, who can afford it? If only some of us can, then whose future is it?

What our science fiction got wrong was the assumption that the new devices powering the future would sink into the background, hardworking and unremarkable. Instead, they would be consumer technologies, and their very ownership would say something about the person or company that bought them.

It could say, “I’m an Apple person” or “I’m an Android person.” It could say, “I’m sporty” or “I believe in open-source systems.” It could also say, “I can afford to live in the future; how about you?”


TIME Environment

Stop Counting on Individuals To Solve California’s Water Crisis

A sprinkler waters a lawn on April 7, 2015 in Walnut Creek, California.
Justin Sullivan—Getty Images

Janet Vertesi is Assistant Professor of Sociology at Princeton University.

"Even if 50,000 people shorten their showers, this is a drop in the proverbial bucket"

The drought in California has revealed more than just dry lake beds. As NASA issues dire warnings about the prospects for water in the state, Californians are cutting down on their showers, draining their swimming pools, and berating their neighbors for their neatly manicured lawns. Most distressing about the current crisis, though, is what it has revealed about the assumptions that underlie our environmental policies and technologies. Many believe that saving water starts at home, but in this case it’s not residential consumers but large agricultural customers who use most of the water provided by state utilities.

We like to think of individual consumers as rational economic actors, whose choices to consume (or not) affect the pricing and availability of a product through market forces like supply and demand. So deeply ingrained in us is this notion that it appears to be common sense for Governor Jerry Brown to impose restrictions on residential water customers. We are therefore currently witnessing an unprecedented interest in water and the idea that individuals matter. Remember when we used to count our carbon footprint? Now Californians count gallons of water. They’re actively discussing the benefits and drawbacks of “water-hungry” legumes, meats and vegetables online. It is already a faux pas to serve almonds at parties.

Our environmental technologies also reflect this simplistic, individualist ideal about conservation. The Nest thermostat not only manages your home heating and cooling systems but also tells you how much energy you’ve saved and encourages you to save more. Airlines offer online carbon counters so you can offset your upcoming flight, while new cars such as the Prius or the Leaf let you drive while contributing less CO2 to the atmosphere. It probably won’t be long before apps show you exactly how much water you are consuming.

The idea that individual consumption choices can make a difference is nice in theory. But individual efforts to “go green” pale in comparison to the effects of large-scale systems on our air quality, our water availability, or our climate. The enormous agricultural systems in California require so much water because they are busy feeding the rest of the United States and exporting food to other parts of the world. Even if 50,000 people shorten their showers, this is a drop in the proverbial bucket.

As with any infrastructural change, opting out is expensive for individual citizens. It can take years for the added cost of a hybrid vehicle to make the reduced cost of a tank of gas worthwhile. Further, because companies negotiate discounted costs for their access and use, their choices to consume are sheltered from mainstream market pricing. Even when supply is limited, demand can remain high due to this price-fixing. So a market-based approach aimed just at individual consumers is doomed to failure.

To be sure, existing theories of “efficiency generation” suggest that individual consumers do matter. If thousands of customers install energy-efficient light bulbs, environmental economists equate the resulting power savings with the investment required to bring new generating facilities online. Although the cost of utilities is established differently for residential consumers versus industrial ones, individual habits can and do add up.

But only if we start thinking on a larger scale. That means thinking about how existing systems influence or curtail our actions and possibilities, rethinking how our current large-scale systems intersect with people, companies, and regulatory agencies, and considering new technologies, policies, and pricing structures that can produce system-wide change.

It also means opening new technological opportunities. What if my thermostat didn’t just tell me what I am saving but also what my neighbors are saving, helping me to support and reinforce their choices? What if we expanded the “carbon offset” idea to water-needy companies to help them consume what they needed while giving back elsewhere? What if several different companies that wish to manage certain resources could band together, across distances, accessing and visualizing the data they need to evaluate their changes?

Fortunately, many of our best and brightest are already on location in California. Silicon Valley has long professed its desire to “disrupt” the system and “change the world.” Here is their chance.


TIME technology

Celebrity Nude Photo Hack Exposes Flaw In How We Think About Privacy and the Cloud

Daniel Sambraus—Getty Images

We assume our data stored in the cloud is personal, but in reality it is interpersonal

The celebrity photo hacking scandal has once again put personal information privacy in the spotlight. Apple certainly has egg on its face for its failure to implement a simple level of security on its iCloud service. But some commentators, bloggers, and Twitter users are pointing fingers at the victims of the hack for keeping such photos to begin with. Do we all need to be Internet security experts to use an iPhone?

Extreme measures are not the answer. Instead, our current system of data privacy is based on a fundamental flaw. We are all supposed to be solely responsible for our personal information, but at the same time we are all part of a social network of family, friends and services with whom we are expected to share.

Our systems are set up to make us entirely responsible for safeguarding our data. We toggle innumerable privacy settings that are constantly being updated. We navigate many different online services and platforms, each with its own complications. We are told to back up our data to hard disks and clouds, and exhorted to change our passwords regularly.

At the same time, we are expected to share, share, share. Parents post photos of their kids online so that distant grandparents can follow along. Young travelers start blogs to record their adventures for friends back home. Even nude selfies are rarely taken for the subject’s enjoyment alone; they are shared with loved ones far away. Indeed, hacked actress Mary Elizabeth Winstead described her leaked photos as ones that “I took with my husband.”

A wide variety of technological services – not just iCloud but also Dropbox and Google Drive – have grown around this requirement for social sharing. Social mores have kept up with the trend. Now, grandparents join Facebook with the expectation that they’ll see photos of their grandchildren. To resist posting is to be a bad parent.

When you fail at this delicate balance of privacy and sharing, friends and strangers alike will line up to tell you that it’s your fault. But how can you keep control over a video of your kids when you post it for your parents? How can you stop a friend at a party from syncing a photo of you to iCloud and uploading it to Instagram, or auto-tagging you using facial recognition on Facebook or Picasa? And when a photo that you text to your friend with an iPhone now belongs both to your pal and to Apple, who is to blame when the photo leaks?

Security experts will claim that the solution to the current hack is better password protection. But even backup security questions – which ask for a personal fact about you, such as a birth date or a pet’s name, in addition to a password – fail to account for the fact that most information about us is readily available online. Children’s names, graduation dates and old addresses are only a search term away.

An alternative view is emerging from a study I’m conducting with colleagues at Princeton University and Yahoo Labs. We are learning that our data systems ask us to be individually responsible but fail to account for how and why we share data with each other. They assume our data is personal, when in reality it is interpersonal. We are caught between opting out entirely and managing an impossible number of changing services with finesse. We do all this with our most important relationships at stake.

Let’s call this what it is: data is only “personal” when it is leaked. That’s why blaming the victims doesn’t help. The next time this happens, it won’t be the celebrities who are hacked for their photos: it will be their friends, their parents, their boyfriends and girlfriends. Everyone is vulnerable.

Keeping each other’s data safe is everyone’s responsibility. It’s time that our devices and services live up to it.

Janet Vertesi is assistant professor of sociology at Princeton University, where she is a faculty fellow at the Center for Information Technology Policy. She is a 2014 recipient of a Yahoo Faculty Research Engagement Program grant to study personal data privacy practices.

TIME technology

The Real Reason You Should Be Worried About That Facebook Experiment

Justin Sullivan—Getty Images Facebook CEO Mark Zuckerberg shows off the new Timeline as he delivers a keynote address during the Facebook f8 conference on September 22, 2011 in San Francisco, California.

The private sector and tech companies are increasingly funding what was once independent social science research.

People are up in arms about the recent revelation that Facebook manipulated its users during a psychological study. The study, published in the Proceedings of the National Academy of Sciences and conducted by researchers at Cornell University and Facebook (full disclosure: I know the authors and the article’s editor), showed that people who saw more happy messages in their News Feed were more likely to post happy messages too. The researchers interpreted this as support for the theory of “emotional contagion”: that emotions can spread through online posts and interactions.

Unfortunately for the researchers, the explosive response to their study over social media confirms their findings. Negative emotion can easily go viral.

Why is this study so controversial? Psychologists have known for years that individuals’ emotions can be influenced by their social surroundings. Sociologists have also shown that people act like their friends or other people around them in order to fit in. Just as no one wants to be a Debbie Downer at a party, posting sad stories online when your friends are posting happy ones seems to be a no-no. If anything, the findings add to a long list of Internet studies that argue against “digital dualism” — the notion that we behave differently online than we do offline — by showing that the online world plays an active role in shaping our social lives and experiences.

If the study’s findings are not controversial, its methods certainly are. Yet whether we like it or not, tech companies experiment with their users in precisely this way all the time. User-interface designers and researchers at places like Google, Facebook and Yahoo regularly tweak the live site’s interface for a subset of visitors to see whether users behave differently in response. While this technique shines new light on user behavior, the overall goal is to bring the company more revenue through more users, clicks or glances at ads. Stories of designers who made their companies millions more dollars in advertising revenue just by altering a single pixel on the homepage are legendary in Silicon Valley.
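The mechanics of such live-site experiments are simple at heart: each visitor is deterministically sorted into an arm of the test, and behavior is compared across arms. A minimal sketch of that bucketing idea is below; the hash-based assignment, function names, and experiment label are illustrative only, not any company's actual system.

```python
import hashlib

def assign_bucket(user_id: str, experiment: str,
                  buckets=("control", "variant")) -> str:
    """Deterministically assign a user to an experiment arm.

    Hashing the user id together with the experiment name yields a
    stable, roughly uniform split without storing per-user state:
    the same user always lands in the same arm of the same test.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return buckets[int(digest, 16) % len(buckets)]

# Users in the "variant" arm see the tweaked interface; clicks,
# posts, or ad glances are then compared against the control arm.
arm = assign_bucket("user42", "feed_ranking_test")
print(arm)
```

Because assignment is a pure function of the user and the experiment, millions of visitors can be split into arms on the fly, which is what makes experimenting "without users' knowing" so cheap to do at scale.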

That’s why any tech company worth its salt has a research department staffed with Ph.D. scientists to analyze their data. That’s also why Facebook is actively hiring and reaching out to social scientists to help it better understand its data and reach new user populations.

Researchers, for their part, are increasingly joining forces with tech companies. There are many reasons to do so. From location check-ins to threaded conversations, from tweets in times of crisis to shared family photos, the reams of data present a fascinating slice of social life in the 21st century. These platforms also provide an unprecedented venue for a natural experiment at scale. With only a few tweaks, and without users’ knowing, researchers can witness which simple changes have tremendous effects.

As a sociologist of technology, I’ve witnessed these changes firsthand. I have grants from Microsoft and Yahoo; Intel funds my colleagues’ students; our graduates staff the labs at Facebook and Google. These collaborations aim to keep Internet research both current and practical.

But there are other reasons social scientists are turning to tech companies. Public money for social-science research is being slashed at the federal level. The congressional committee that oversees the National Science Foundation wants to cut $50 million to $100 million of social, behavioral and economics funding for the next two years (again, full disclosure: I have received NSF funding). A bill called FIRST: Frontiers in Innovation, Research, Science and Technology aims to improve American competitiveness by funding research that supports a U.S. industry advantage. Yet the committee has called specifically for the NSF to stop giving grants to study social media, online behavior or other Internet topics.

Ironically, at precisely the time when American technology companies are looking to social science to help understand their users and improve their business, this research is being denigrated in the House. And at exactly the time when independent research on Internet practices is needed, scholars must turn to companies for both data and funding.

This is a shortsighted move. On the one hand, it means we will train fewer social scientists to rigorously and responsibly answer the new questions posed by big data. But it also pushes basic research about online social life exclusively into the private sector. This leaves the same companies that make the technologies we use to talk, shop and socialize responsible for managing the ethics of online experimentation. No wonder that esoteric questions like informed consent are suddenly headline news.

The recently released study, then, does present reasons to be alarmed, though not for the reasons most of us think. Facebook isn’t manipulating its users any more than usual. But the proposed changes in social-science funding will have a more lasting effect on our lives both online and offline. That should inspire some emotions that are worth sharing.

Janet Vertesi is assistant professor of sociology at Princeton University, where she is a faculty fellow at the Center for Information Technology Policy

TIME technology

My Experiment Opting Out of Big Data Made Me Look Like a Criminal

Bloomberg/Getty Images The Facebook Inc. and Twitter Inc. company logos are seen on an advertising sign during the Apps World Multi-Platform Developer Show in London, U.K., on Wednesday, Oct. 23, 2013.

Here's what happened when I tried to hide my pregnancy from the Internet and marketing companies

This week, the President is expected to release a report on big data, the result of a 90-day study that brought together experts and the public to weigh in on the opportunities and pitfalls of the collection and use of personal information in government, academia and industry. Many people say that the solution to this discomfiting level of personal-data collection is simple: if you don’t like it, just opt out. But as my experience shows, it’s not as simple as that. And it may leave you feeling like a criminal.

It all started with a personal experiment to see if I could keep a secret from the bots, trackers, cookies and other data sniffers online that feed the databases that companies use for targeted advertising. As a sociologist of technology, I was launching a study of how people keep their personal information on the Internet, which led me to wonder: Could I go the entire nine months of my pregnancy without letting these companies know that I was expecting?

This is a difficult thing to do, given how hungry marketing companies are to identify pregnant women. Prospective mothers are busy making big purchases and new choices (which diapers? Which bottles?) that will become their patterns for the next several years. In the big-data era of targeted advertising, detection algorithms sniff out potentially pregnant clients based on their shopping and browsing patterns. It’s a lucrative business; according to a report in the Financial Times, identifying a single pregnant woman is worth as much as knowing the age, sex and location of up to 200 people. Some of these systems can even guess which trimester you’re in.
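The detection systems described above are proprietary, but the underlying idea is a weighted score over purchase signals: certain products, bought together, push a shopper over a flagging threshold. A toy sketch follows; the products, weights, and threshold are invented for illustration and bear no relation to any real retailer's model.

```python
# Toy illustration of pattern-based detection: each purchase signal
# carries a weight, and a shopper whose combined score crosses a
# threshold gets flagged for targeted advertising. Real systems are
# proprietary and far more sophisticated; these values are made up.
SIGNAL_WEIGHTS = {
    "prenatal vitamins": 0.9,
    "unscented lotion": 0.4,
    "cotton balls": 0.3,
    "large handbag": 0.2,
}

def score(purchases):
    """Sum the weights of any recognized signals in a purchase list."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in purchases)

def flag_shopper(purchases, threshold=1.0):
    """Flag the shopper when the combined signal score crosses the threshold."""
    return score(purchases) >= threshold

print(flag_shopper(["prenatal vitamins", "unscented lotion"]))  # → True
print(flag_shopper(["cotton balls"]))                           # → False
```

The point of the sketch is that no single purchase gives you away; it is the accumulation of individually innocuous signals that does, which is why evading such systems requires changing so many habits at once.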

Avoiding this layer of data detectors isn’t a question of checking a box. Last year, many people were shocked by the story of the teenager in Minnesota whose local Target store knew she was expecting before her father did. Based on her in-store purchasing patterns tracked with credit cards and loyalty programs, Target started sending her ads for diapers and baby supplies, effectively outing her to her family. Like the girl in the Target store, I knew that similar systems would infer my status based on my actions. So keeping my secret required new habits, both online and off.

Social media is one of the most pervasive data-collection platforms, so it was obvious that I couldn’t say anything on Facebook or Twitter, or click on baby-related link bait. But social interactions online are not just about what you say but also what others say about you. One tagged photo with a visible bump and the cascade of “Congratulations!” would let the cat out of the bag. So when we phoned our friends and families to tell them the good news, we told them about our experiment, requesting that they not put anything about the pregnancy online.

Social media isn’t the only offender. Many websites and companies, especially baby-related ones, follow you around the Internet. So I downloaded Tor, a privacy-protecting browser that routes your traffic through a network of relays around the world. While it has a reputation for facilitating illicit activities, I used it to visit BabyCenter.com and to look up possible names. And when it came to shopping, I did all my purchasing—from prenatal vitamins to baby gear and maternity wear—in cash. No matter how good the deal, I turned down loyalty-card swipes. I even set up an Amazon.com account tied to an email address hosted on a personal server, delivering to a locker, and paid with gift cards purchased with cash.

It’s been an inconvenient nine months, but the experiment has exposed harsh realities behind the opt-out myth. For example, seven months in, my uncle sent me a Facebook message congratulating me on my pregnancy. My response was downright rude: I deleted the thread and unfriended him immediately. When I emailed to ask why he did it, he explained, “I didn’t put it on your wall.” Another family member who reached out on Facebook chat a few weeks later exclaimed, “I didn’t know that a private message wasn’t private!”

This sleight of hand is intentional. Internet companies hope that users will not only accept the trade-off between “free” services and private information but will also forget that there is a trade-off in the first place. Once those companies have that personal data, users don’t have any control over where it goes or who might have access to it in the future. And unlike the early days of the Internet, in which digital interactions were ephemeral, today’s Internet services have considerable economic incentives to track and remember—indefinitely.

Attempting to opt out forced me into increasingly awkward interactions with my family and friends. But, as I discovered when I tried to buy a stroller, opting out is not only antisocial; it can appear criminal.

For months I had joked to my family that I was probably on a watch list for my excessive use of Tor and cash withdrawals. But then my husband headed to our local corner store to buy enough gift cards to afford a stroller listed on Amazon. There, a warning sign behind the cashier informed him that the store “reserves the right to limit the daily amount of prepaid card purchases and has an obligation to report excessive transactions to the authorities.”

It was no joke that taken together, the things I had to do to evade marketing detection looked suspiciously like illicit activities. All I was trying to do was to fight for the right for a transaction to be just a transaction, not an excuse for a thousand little trackers to follow me around. But avoiding the big-data dragnet meant that I not only looked like a rude family member or an inconsiderate friend, but I also looked like a bad citizen.

The myth that users will “vote with their feet” is simply wrong if opting out comes at such a high price. With social, financial and even potentially legal repercussions involved, the barriers to exit are high. This leaves users and consumers with neither a real choice nor a voice to express their concerns.

No one should have to act like a criminal just to have some privacy from marketers and tech giants. But the data-driven path we are currently on—paved with the heartwarming rhetoric of openness, sharing and connectivity—actually undermines civic values and circumvents checks and balances. The President’s report can’t come soon enough. When it comes to our personal data, we need better choices than either “leave if you don’t like it” or no choice at all. It’s time for a frank public discussion about how to make personal-information privacy not just a series of check boxes but a basic human right, both online and off.
