Why Did Pollsters Get So Many Races Wrong?


Election Night wasn’t just bad for Democrats. It was also bad for pollsters.

Consider the following: Arkansas Sen. Mark Pryor lost in an unexpected blowout. Virginia Sen. Mark Warner, who was widely expected to cruise to victory, is currently ahead by just 12,000 votes. Iowa Senator-elect Joni Ernst, predicted to win narrowly, won by over eight points. Georgia Senator-elect David Perdue, expected to go to a runoff, won outright. Aggregate polling data predicted North Carolina Sen. Kay Hagan and Kansas Independent Greg Orman would win by the skin of their teeth, but both lost.

And, in perhaps the worst missed call, Maryland Governor-elect Larry Hogan won by nine points when one recent poll had shown him losing by 13.

How did so many predictions go wrong? For one thing, more Republicans turned out than people expected.

Dr. Sam Wang of the Princeton Election Consortium and Mark Blumenthal, senior polling editor at the Huffington Post, agreed that Republicans outperformed polls in both Senate and gubernatorial races. Overall, Republicans beat their pre-election polls by about five points in Senate races and about two points in gubernatorial ones, according to Wang.

“I think a lot of the election polls had the likely electorate models wrong, one way or another,” says Blumenthal. “I would guess that there was probably too many Democrats—that they had people who turned out not to vote in the sample who were disproportionately Democratic leaning.”

There were a few states in particular that shocked Wang and Blumenthal.

“Virginia was obviously a huge surprise last night,” said Wang. “I was watching data come in and at first I thought it was some kind of data error because it just didn’t look right—it looked like it was 10 points off.”

“Whether it’s older voters or white voters, whatever the case, I think the demographic of people who voted was evidently pretty different from the demographic of people who were surveyed,” he added. “I would say that Republican relative overperformances were so large that there has to have been something like a collective misjudgment of who likely voters would be.”

“In Virginia and Maryland—we weren’t watching closely enough,” agreed Blumenthal.

The pollsters tempered their critiques of election models, noting that in many races the polls accurately predicted who would win, if not by how much. Some polls, like those tracking the New Hampshire Senate race, were “right on the button,” says Wang, and there were only two Senate races that pollsters might have gotten “wrong”—North Carolina and Kansas—but that’s “par for the course” in midterm elections.

The modern problems with polling data—including the cultural and technological shift from landline phones to cell phones making it increasingly difficult to target younger, urban voters—may not have had that much of an impact this time around, says Wang.

“People talk about those deficiencies but those probably were not the cause of this because most of those problems are problems that tend to miss Democratic voters,” says Wang. “If anything these polls obviously underestimated Republican turnout.”

But Blumenthal cautions against treating the polls that assumed higher GOP turnout as vindicated, saying that the best polling evaluations still come from voter lists that match respondents with their voting records.

“The cheap, flawed methodologies that are out there—the robopolls that make no effort to compensate for the cell-only population—those are going to get more Republicans and some of those were more ‘accurate’ in the last week, in the last month or two than other methods,” says Blumenthal. “If we all want to figure this out and if we want to do better in polling in the future, those voter lists methods offer us far more tools to diagnose what happened and to chart a better course.”

“I think we’re going to end up drawing the wrong lesson if we just look at who came closest to getting the result right this time,” he added.
