Nation: Where the Polls Went Wrong

John F. Stacks

Reagan’s landslide challenges the pulse-taker profession

For weeks before the presidential election, the gurus of public opinion polling were nearly unanimous in their findings. In survey after survey, they agreed that the coming choice between President Jimmy Carter and Challenger Ronald Reagan was “too close to call.” A few points at most, they said, separated the two major contenders.

But when the votes were counted, the former California Governor had defeated Carter by a margin of 51% to 41% in the popular vote—a rout by the standards of U.S. presidential races. In the electoral college, the Reagan victory was a 10-to-1 avalanche that left the President holding only six states and the District of Columbia.

After being so right for so long about presidential elections—the pollsters’ findings had closely agreed with the voting results for most of the past 30 years—how could the surveys have been so wrong? The question is far more than technical. The spreading use of polls by the press and television has an important, if unmeasurable, effect on how voters perceive the candidates and the campaign, creating a kind of synergistic effect: the more a candidate rises in the polls, the more voters seem to take him seriously.

With such responsibilities thrust on them, the pollsters have a lot to answer for, and they know it. Their problems with the Carter-Reagan race have touched off the most skeptical examination of public opinion polling since 1948, when the surveyors made Thomas Dewey a sure winner over Harry Truman. In response, the experts have been explaining, qualifying, clarifying—and rationalizing. Simultaneously, they are privately embroiled in as much backbiting, mudslinging and mutual criticism as the tight-knit little profession has ever known. The public and private pollsters are criticizing their competitors’ judgment, methodology, reliability and even honesty.

At the heart of the controversy is the fact that no published survey detected the Reagan landslide before it actually happened. Three weeks before the election, for example, TIME’s polling firm, Yankelovich, Skelly and White, produced a survey of 1,632 registered voters showing the race almost dead even, as did a private survey by Carter’s pollster, Patrick Caddell. Two weeks later, a survey by CBS News and the New York Times showed about the same situation.

Some pollsters at that time, however, were getting results that showed a slight Reagan lead. ABC News-Harris surveys, for example, consistently gave Reagan a lead of a few points until the climactic last week of October.

The single exception to these general findings was the judgment drawn by the Reagan campaign’s own elaborate polling operation, run by Richard Wirthlin, who claims that Reagan had a consistent five- to seven-point lead throughout the last two weeks of the campaign.

Caddell, on the other hand, still stands by his figures, which reflected a close race right up until the weekend before the election. On the Saturday before the election, four days after he had come off second best in the debate with Reagan, Carter was about even with Reagan, insists Caddell. But by Sunday night, he says, Carter’s campaign had collapsed. Caddell’s reason: the hostage issue was again in the news and again unsettled, reviving the public’s frustration with Carter. Caddell’s data show Carter suddenly dropping five points behind by Sunday night, with another five-point collapse by Monday night.

The public opinion industry has christened Caddell’s thesis the “big bang” theory of the campaign: 8 million voters moving to Reagan in 48 hours. To a large extent, most public opinion researchers support this theory, although many do so with major qualifications.

Says TIME’s pollster Daniel Yankelovich: “There is every reason to assume that is what happened. When people are conflicted, they procrastinate. And that’s what they did in this election.”

Warren Mitofsky, director in charge of the polling effort run by CBS News and shared by the New York Times, has produced a new opinion survey that seems to substantiate the big bang theory. Re-interviewing 2,651 adults who had been questioned before the election, Mitofsky found that some 13% of the voters changed their minds in the last few days of the campaign and that Reagan got the lion’s share of the switchers. Says Mitofsky: “Caddell’s thesis is consistent with what CBS found.”
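
As a rough sketch of how a panel re-interview of this kind surfaces vote switchers, the tally below pairs each respondent’s pre-election choice with the vote later reported. The respondents and choices are invented stand-ins; the article gives only the panel size and the summary percentages, and says nothing about the actual questionnaire or weighting.

```python
# Hypothetical panel re-interview: each respondent's pre-election choice
# paired with the vote they reported after Election Day.
from collections import Counter

panel = [
    ("Carter", "Carter"), ("Carter", "Reagan"), ("Reagan", "Reagan"),
    ("Undecided", "Reagan"), ("Carter", "Reagan"), ("Reagan", "Reagan"),
    ("Undecided", "Carter"), ("Carter", "Carter"),
]  # invented stand-in for the 2,651 re-interviewed adults

# Count only the respondents whose reported vote differs from their
# earlier stated choice, broken out by direction of the switch.
switches = Counter((before, after) for before, after in panel if before != after)
changed = sum(switches.values())
print(f"{changed / len(panel):.0%} of the panel changed their minds")
for (before, after), n in switches.most_common():
    print(f"{before} -> {after}: {n} respondents")
```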

The Harris organization, which polled throughout the weekend and on Monday, showed Reagan gaining points right up to Election Day. By Monday night, according to Harris Executive Vice President David Neft, an unpublished Harris survey had Reagan six points ahead of Carter. Others picked up the trend too, and Wirthlin showed a widening gap through the weekend until Monday night when he, like Caddell, pegged the margin at about ten points in Reagan’s favor. The Gallup survey, which eleven days before the election had Carter ahead by three points, found Reagan moving from 42% to 44% to 47% in its final survey, taken on Nov. 1.

But although there is agreement on the fact that the gap widened at the end, no one except Caddell and Wirthlin came close to calling the margin. The Harris organization, which is claiming great credit for doing better than other public polls, was four points off Reagan’s actual voting percentage, the largest error factor it has ever had in a presidential election. Gallup not only missed the winner’s voting percentage by four points but also erred by putting Reagan’s lead at only three points. The margin was, says George Gallup, “a deviation greater than the average deviation of 2.3 percentage points for the 23 national elections covered by the Gallup poll.”
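
Read as arithmetic, the deviation Gallup cites is just the gap between the final poll figure for the winner and the winner’s actual share. A one-line check using the article’s numbers:

```python
# Deviation in Gallup's sense: final poll figure for the winner vs. the
# actual vote share (both figures from the article).
gallup_final = 47   # Reagan's share in Gallup's final survey, percent
actual_share = 51   # Reagan's actual popular-vote share, percent

deviation = abs(actual_share - gallup_final)
print(f"{deviation} points, against a 2.3-point historical average")
```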

Everyone agrees that to some extent the Reagan margin over Carter grew in the last few days before the election. But they disagree over how much, when and why. Indeed, reading from the same computer printouts, CBS News and the New York Times disagree over how much impact the hostage crisis had; CBS News says not much, while the New York Times analysis says it “was a major element.”

Looking for explanations of what went wrong, Wirthlin believes that the other pollsters erred by estimating that there would be more Democrats in the final body of voters than there turned out to be. He also criticizes the others for asking the key presidential-choice question first instead of last, after asking about issues and impressions of the candidates. This, he insists, produced a pro-Carter bias.

Mitofsky disagrees strenuously with the criticisms. Says he: “I can’t buy their approach to making estimates from data. I’m not prepared to throw out our techniques just because one poll produced a different number. In fact, if we were doing this all again, I would not change a single thing except to poll the last two days of the campaign. To believe their figures, too many other people have to be wrong.”

But Neft at Harris thinks Mitofsky’s post-election poll was wrong and was designed to explain away earlier numbers. Neft dismisses the notion that huge changes occurred at the last minute. Says he: “Nothing like that quantity and magnitude happened.” He explains the Harris four-point discrepancy by citing unexpectedly low turnout among Democrats on Election Day, a view shared by Gallup.

Two basic conclusions jump out of the unhappy experiences of the pollsters. First, most of the private surveyors stopped work too early to pick up the last-minute switches, whether the change was enormous, as most now believe, or whether, in Wirthlin’s phrase, “the mountain didn’t jump—it slid a little.” The reason that most private firms did not survey intensively right up until the last moment is simple: it would have cost too much.

The price of interviewing a single voter and then adding the data to the calculations is about $15. A major national survey usually contacts at least 1,500 people, running up a bill of about $22,500.
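
As a quick sketch of that arithmetic: the $15 unit cost and the 1,500-person sample are the article’s round numbers; extending the same unit cost to a nine-week nightly tracking schedule of the kind described below is an assumption of the sketch, not a figure from the article.

```python
# The article's round numbers: $15 per completed interview and a
# 1,500-person national sample.
cost_per_interview = 15
sample_size = 1_500
print(f"${cost_per_interview * sample_size:,}")  # -> $22,500

# Assumption, not an article figure: applying the same $15 unit cost to
# a 500-person nightly tracking poll over a roughly nine-week campaign.
nightly_sample = 500
nights = 9 * 7
print(f"${cost_per_interview * nightly_sample * nights:,}")  # -> $472,500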

As it happened, only the candidates themselves were prepared to spend that kind of money time and again. Harris, for example, spent $350,000 on presidential polling from Labor Day on, whereas Caddell ran up bills of some $2 million. Wirthlin’s operation spent $1.3 million and surveyed 500 people every night of the fall campaign until the last few days, when it contacted 1,000 nightly. The findings were then calculated on a rolling, three-day average, which Wirthlin contends evened out the peaks and valleys that other pollsters perceived with their single-shot surveys. Wirthlin is frank enough to admit that he had a great advantage over the public pollsters. Says he: “Their major problem was the lack of resources and lack of continuity.”
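
A minimal sketch of the rolling three-day average Wirthlin describes, with invented nightly figures standing in for the unpublished tracking data:

```python
# Invented nightly tracking results (Reagan's share, in percent); the
# campaign's real numbers were never published.
nightly = [44, 47, 43, 45, 48, 46, 50, 49]

# Each reported figure averages the three most recent nights, smoothing
# the peaks and valleys a single night's sample would show.
rolling = [sum(nightly[i - 2 : i + 1]) / 3 for i in range(2, len(nightly))]
print([round(x, 1) for x in rolling])  # e.g. [44.7, 45.0, 45.3, ...]
```

Because each night’s sample contributes to three successive published figures, a one-night blip is damped to a third of its size, which is one reason a continuous tracking operation sees a slide where single-shot polls see jumps.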

In mid-October, the discrepancies between Wirthlin’s findings and those of the published surveys created a near panic in the Reagan camp. Under pressure from their colleagues, Wirthlin and his assistants spent a frantic three days reviewing their numbers and techniques. They decided they were right, but Caddell, for one, still believes that they had Reagan too far ahead too early.

The other lesson of the polling season was that the experts have by no means perfected the questions or the techniques that enable them to predict how undecided or unhappy voters will go on Election Day.

One puzzling phenomenon that the pollsters have not been able to cope with, or even explain thoroughly, is the so-called closet Reaganite. For whatever reason, some people who had said they would not vote for Reagan in this election clearly did so.

Everett Ladd, director of the University of Connecticut’s Social Science Data Center, says flatly: “I am 100% certain that there was no ‘closet Reaganism’ in this election.” Other pollsters tend to agree. But there is some evidence that suggests otherwise. Before the election, only 7% of the blacks surveyed by New York Times-CBS News said they were going to vote for Reagan; Election Day exit polling showed that 14% had actually cast their ballots for the Californian. But when re-polled by New York Times-CBS News, only 6% of blacks admitted they had voted for Reagan.

If the pollsters are united on one point, it is that they are not solely to blame for misleading the public; the fault must be shared with the press, they say, which has never fully understood the limitations of surveying.

Says Cliff Zukin, poll director of the Eagleton Institute of Politics: “We are overconsumed with predicting what will happen. Polls predicting who is going to win the election are worthless. First, they can be very inaccurate at the time of the election because they are only accurate at the time they are taken. They do not predict the future.” Agrees Marquette University Sociologist Wayne Youngquist: “The media want the pollsters to be seers. We want them to do more than they can.”

Negative voting, large numbers of undecideds, low turnout — all these factors made polling this year more difficult. Says Caddell: “This is the first election in which the voters didn’t really like either candidate much.”

Says Ladd: “We need a different methodology of election polling that takes into account the vastly greater flexibility that in the long-term sense characterizes the electorate. We know something breaking at the last minute—and it doesn’t have to be something very big—can change results. We shouldn’t pay too much attention to the earlier polls.”

Yankelovich points out that polls can produce numbers reflecting very firmly held, nearly unchangeable opinions, and can at the same time record views that are “mushy.” Along with TIME, he is at work on a new technique that will show which figures are “hard” and which are “soft.”

Admits Yankelovich: “Our greatest failure was to not point out more clearly that the implications of our data were that great movement could occur.”

In the end, as Yankelovich suggests, the main fault of the pollsters in a volatile year was that they did not view their own findings with enough skepticism — and drive the point home much more forcefully.
