
We Only Think We’re Making Our Own Choices. It Matters How Options Are Framed

Johnson is the Norman Eig Professor of Business and the director of the Center for Decision Sciences at Columbia Business School. He has been the president of both the Society for Judgment and Decision Making and the Society for Neuroeconomics. He lives in New York City.

You might not know it, but when you gave your partner some choices for dinner after work this evening, you were a choice architect. You almost automatically thought about how many options to present, you presented them in a certain order, and you might have described them as heavy or light, meaty or vegetarian, or scrumptious or healthy. Whether you realized it or not, your presentation probably influenced their response.

You are not alone: every store, app, financial advisor, doctor, and parent practices choice architecture, and even though we all do it without a license, it matters. It affects everything from what kind of eggs we buy to where we choose to live. In fact, it touches on just about every decision we make.

Choice architecture refers to the many aspects of how a choice is posed that can be manipulated, intentionally or inadvertently, to influence the decisions we make. The options may be the same, but the presentation can change your choice. Before you make a decision, someone has molded many of the characteristics of that choice for you, and these design decisions will in some way affect what you choose.

Many people, when first introduced to the concept, are uncomfortable with or even afraid of it. As choosers, they are afraid their decisions might be influenced by something outside their control, without their awareness, and that they might be exploited. As designers, they worry about influencing others unintentionally or in harmful ways.

Choice architecture can seem threatening: it gives the designer some control over what we choose. By setting a default, the car dealership is affecting what trim package you choose for your new vehicle; by sorting wine on their site, the online wine merchant may be making cheap, low‑quality wine seem more attractive.

If people are unaware of the influence of choice architecture, maybe we can just tell them that they are going to be influenced. Warnings accompany all sorts of products, from vacuum cleaners to cigarettes, so why not choice architecture?

Unfortunately, disclosing the presence and intent of choice architecture does not seem to work. Several studies have told people what defaults do, in various ways, including disclosing that the goal is to change their behavior. All that warning appears to do is make the nudge seem more acceptable.

This is why neglecting choice architecture as a designer can lead to harm. Designers don’t always appreciate the power of the tools at their disposal. By selecting those tools haphazardly, they might be harming choosers unintentionally.

Consider end‑of‑life decisions. When people are gravely ill, they can choose interventions that could extend their lives, but these therapies are intrusive and unpleasant, and the time they add often comes at the price of being put on a ventilator or having a feeding tube inserted. Grouped together, these treatments are called life-extension care. The alternative is called comfort care. The latter involves declining many invasive interventions and focusing on managing pain and ensuring comfort.

In a remarkable study by Scott Halpern and colleagues, patients with terminal illnesses made choices about end‑of‑life care that actually determined their treatment. The advance directive given to them by the researchers first asked what their goal for care was: life extension or comfort care. For a third of the patients, the directive had a comfort‑care default, indicated by a pre-checked box. A second group had no preselection, and the third group had the life‑extension goal already checked. Comfort care was selected 77 percent of the time when it was the default, 66 percent of the time when no option was preselected, and only 43 percent of the time when life extension was the default. Because the stated goal governed which treatments patients received, the default affected specific interventions, like whether a feeding tube was inserted. It is remarkable that the default had such a large effect on such an important decision. But what happened next is even more informative.

Being ethical scientists, the investigators later explained to all the patients (at least those who were still living) that they had been randomly assigned to those defaults, told them about the influence of default effects, and, most important, offered the respondents the chance to change their minds. If patients had preferences, this was their chance to express them. Yet, of the 132 terminal respondents, only 2 changed their choices. Even when they were told what the defaults were, that the defaults had been assigned randomly, and how defaults influence choice, the effect of the defaults persisted.

This is strong evidence that many people do not have preexisting preferences for end-of-life care. This is an incredibly difficult and unfamiliar decision. Most patients have not experienced intubation, the insertion of a feeding tube, or dialysis before, and these aren’t decisions anyone enjoys thinking about in advance. When the time comes, the primary decision‑maker may not be conscious, and the family members who inherit the decisions are overwhelmed.

Making sure that people have a choice is laudable, but overwhelmed decision-makers are even more likely to take the default. This is a problem: there is a disconnect between what people say they want when they are forced to make a choice and what happens when they are not forced. Comfort care was selected by the majority of patients in the Halpern‑led study when there was no default. But if you don’t make a choice, that is not what happens in reality. Unless the patient or their immediate family says otherwise, the patient will be treated as if they had chosen life extension. The design of most commonly used advance directives seems to bias people toward life extension. For example, “I want to have life support” is the first option on one commonly used document.

Assuming patient autonomy while ignoring the influence of choice architecture has a significant impact on suffering, cost, and dignity. Doctors may be loath to influence end‑of‑life care choices, but patients don’t want to make these decisions either. This reluctance increases the importance of defaults.

End‑of‑life choice illustrates when choice architecture might have its largest effects. Most choices are mundane and repetitive, but some choices are both important and rare. When people make them, they often lack clear ideas of what they want or how to proceed. Choosing a school, buying a house, selecting a pension plan, and settling on a type of end‑of‑life care are all examples of infrequent decisions with big consequences. Particularly if the decision‑maker has conflicting goals, choice architecture will play a larger role.

Not all bad choice architecture results from ignorance or naïveté. Some designers experiment to see what works—for example, in direct mail campaigns or by conducting A/B tests on the internet. These designers could use this knowledge to advance their interests instead of those of the choosers. The result may be malevolent choice architecture.

Choosers are very sensitive to initial costs, in terms of both money and effort, and badly intentioned designers can exploit that. We have all made the decision to start or stop a subscription service—say, a newspaper or a streaming service like Spotify or Hulu. This sensitivity can be used to construct a subscription trap, in which the designer has made it easy to start and hard to stop. Newspapers are a mild example. It is very easy to start a subscription with a few clicks on most newspapers’ websites at a low initial rate, like $1 a week for fifty-two weeks. But once you have started, it is more difficult to stop—say, when the early rate increases to almost $5 a week. To cancel, you must call an 800 number.

Designers can use costs to inhibit choices and maintain the status quo. Several years ago, I was interviewed by the public radio program Marketplace. The interviewer and I sat together as he tried to change his privacy settings. By default, Verizon could track his phone calls and potentially sell that information. A Verizon representative had said opting out of tracking was very easy. The reality was different: after a long robotic message suggesting that the interviewer could “restrict or change options to his telecommunications service information,” he was given a long menu of options. After pressing 1, indicating that he wanted to change his privacy options, he was asked whether he wanted to place a restriction on his account. This seemed scary, as if he would be giving up something rather than just changing his privacy settings. He was then asked to type in his ten‑digit telephone number as it appeared on his bill, followed by pound. The robotic voice read it back, very slowly, digit by digit, and then asked him to enter it again. It then asked for the first thirteen digits of the account number from his bill, then asked him to speak his first and last name, remembering to press pound each time, speak his address, then his town, state, and zip code, and finally his first and last name again to confirm that he was the decision‑maker. I suspect the phone company already knew his phone number. Not surprisingly, Verizon reported that the number of people opting out was in the single digits.

Of course, many privacy agreements have exactly this structure: we are presented with long and complex terms of service that seem designed to restrict comprehension. It’s estimated that over 90 percent of website users do not read terms‑of‑service documentation. This can lead to bad decisions. In one study, 98 percent of users agreed to a privacy policy that explicitly said it would share all information with the National Security Agency and their employer, and required them to provide their first‑born child as payment. Fortunately, this was an experiment, but bad choice architecture does lead to mistakes in understanding what we are giving up.

Perhaps the most egregious example of malicious choice architecture involves electronic health records (EHRs), the computer systems doctors use to keep track of patients and to prescribe drugs.

Smaller practices and individual doctors don’t have the resources to develop and tune EHRs of their own, so many adopted a free system provided by a successful start‑up called Practice Fusion. Heralded as “the Facebook of health” by TechCrunch, the company funded its free EHR by selling advertising targeted at physicians.

But that is not all. Practice Fusion also received payments from pharmaceutical companies in return for changes in its EHR’s choice architecture. One particularly nefarious example was an agreement between Practice Fusion and a company known in court as “Pharma X.” In exchange for $1 million, Practice Fusion added an alert in 2016 that reminded physicians to ask patients about their pain and then provided options. The alert was presented to doctors 230 million times in a three‑year period, and Pharma X estimated that the alert would add three thousand customers and as much as $11 million in sales. This was happening at the same time that concern was rising about the overprescribing of pain medicines, specifically extended‑release opioids. The problem is that the prompts ignored the Centers for Disease Control and Prevention’s guidelines for opioid prescriptions, which encourage non-pharmaceutical and non-opioid treatments. If a doctor thought opioids were required, the guidelines advised avoiding extended‑release drugs, since they are more likely to lead to long‑term use, and limiting the number of pills to a small supply. Yet Practice Fusion’s EHR system included an option for extended‑release opioids, even where the guidelines warned against them.

Pharma X was Purdue Pharma, the maker of OxyContin, which in 2021 settled a suit for misleading marketing of opioids with fines and payments estimated at $4.5 billion. Meanwhile, Practice Fusion admitted to the payments it had received to change the EHR choice architecture, settling charges brought by federal prosecutors in Vermont for $145 million. Practice Fusion and Purdue were the designers of a choice architecture that caused harm to patients by providing inappropriate options to doctors.

Choice architecture can have an enormous impact on people’s welfare. It can make it harder or easier to control our personal information. It can increase savings for retirement and help students find better schools. It can increase or decrease prescriptions for potentially addictive drugs. Choice design can make a difference, and ignoring it is not an option.

This is particularly true when we look at who is most affected by choice architecture. It has a greater impact, positive or negative, on the people who are the most vulnerable: those with lower incomes, less education, and challenging social circumstances. Put another way, choice architecture could be a particularly potent tool for addressing income disparity and social justice. On the flip side, this means that malevolent choice architecture, like the examples we have just discussed, is particularly harmful to those who are the most disadvantaged.

While a deeper understanding of how choice architecture works may tempt some to manipulate others for their own ends, I hope they will be very much in the minority. Overall, a more widespread understanding of how our design choices affect others should result in more intentional and constructive choice architectures from which we can all benefit.

Designing choice architecture is like picking a path on a map. Many paths are possible, but some are much better for the chooser. Defaults can be selected in choosers’ best interests. Good alternatives can be made easy to see, and not obscured by many bad or irrelevant options. Benefit programs can be made easy to access rather than difficult. And when we do know what we want, good choice architecture can make it easy to find. But how you use your newfound skill is, of course, up to you.

Adapted from THE ELEMENTS OF CHOICE by Eric J. Johnson, published by Riverhead, an imprint of Penguin Publishing Group, a division of Penguin Random House, LLC. Copyright © 2021 by Eric J. Johnson.
