What’s wrong with polling? Less than you think.

Jim Eltringham
4 min read · Dec 7, 2020

Political polls tell us a story — if we understand how to read them.

UNIVAC tried to warn us about how unreliable poll coverage can be. (Photo by Daderot via Wikimedia Commons, https://commons.wikimedia.org/w/index.php?curid=8894394)

The numbers didn’t match expectations.

It was Election Night in America. Programmers had carefully loaded their formulas and algorithms into a computer to analyze early election returns, comparing the evening’s fresh data to data from past elections to predict the outcome. The problem: The program predicted an outcome that seemed unrealistic. So programmers went back and tweaked their models until the computer gave the answer everyone expected.

Students of computers, polling, or both may recognize this scenario, which unfolded in November 1952. CBS News tried to spice up one of the first televised election nights by allowing a UNIVAC computer to predict the results based on early returns. UNIVAC, a room-sized behemoth of early electronics, predicted a landslide for General Dwight D. Eisenhower in a race many expected to be close. So CBS spiked UNIVAC’s first projection, and the programmers tinkered with their algorithms until the computer gave Eisenhower a narrower victory.

But UNIVAC was right the first time. The “electronic brain” (as CBS anchor Walter Cronkite repeatedly called it) originally called for Eisenhower to win handily, with a 438–93 electoral-vote tally over Illinois Governor Adlai Stevenson. The actual result favored Eisenhower 442–89. Later in the evening, a representative of Remington Rand, the company behind UNIVAC, explained that the second prediction came about because humans did not trust the data in front of them and changed their mathematical inputs until the output matched expectations.

This story becomes particularly relevant now, nearly 70 years after Eisenhower beat Stevenson and a month after former Vice President Joe Biden unseated incumbent President Donald Trump. Polls leading up to the 2020 election hinted at a comfortable Biden win; the actual margins in key states were thin enough that news organizations did not call the election until the Saturday after Election Day.

In the aftermath, coverage of the Trump team’s legal challenges to those minuscule margins has occasionally given way to pundits bemoaning the failures of pollsters. It’s a familiar refrain from four years earlier, when Trump outperformed the polls to win the Presidency in an upset over Hillary Clinton. Observers throw out a variety of criticisms, claiming that pollsters rely too heavily on landline telephone interviews or that so-called “shy Trump voters” won’t admit who they intend to support.

Much of this criticism is valid, but it evades the major issue: The biggest problem with polls is how we read them. Or, perhaps more accurately, how we read into them. As with CBS’s 1952 UNIVAC experiment, polling’s biggest challenge remains that the process begins and ends with human assumptions.

Biases and assumptions can show up in various aspects of survey design (including how questions are written or what topics are covered), but the most obvious assumptions are made when deciding what a survey sample should look like. To oversimplify a bit, pollsters try to interview a group of people with the same demographic and philosophical proportions as the people expected to cast votes in the next election. That’s where it gets tricky. After all, how can you know for sure what the electorate is going to look like? Unless you’re psychic (and if you are, why are you polling anyway?), you have to rely on assumptions. You look at previous elections and make your best guess.

That’s fine — it’s how it’s done, after all. But it’s still a guess.
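
Since this guess is the crux of the argument, a toy example may help. The sketch below (in Python, with every group name and percentage invented purely for illustration) shows how the exact same raw interview data can produce different published topline numbers depending on which turnout model the pollster assumes. Real pollsters weight across far more variables, but the principle is the same.

```python
# Toy post-stratification sketch: identical raw poll responses,
# two different guesses about who will actually turn out to vote.
# All group names and numbers are invented for illustration.

# Candidate A's measured support within each demographic group.
support = {"urban": 0.60, "rural": 0.45}

def topline(support_by_group, electorate_model):
    """Weight each group's measured support by its assumed share
    of the electorate and sum to get the published topline."""
    return sum(
        electorate_model[group] * share
        for group, share in support_by_group.items()
    )

# Two defensible but different turnout models.
high_urban_turnout = {"urban": 0.55, "rural": 0.45}
high_rural_turnout = {"urban": 0.40, "rural": 0.60}

print(f"High urban turnout model: {topline(support, high_urban_turnout):.1%}")
print(f"High rural turnout model: {topline(support, high_rural_turnout):.1%}")
# Same interviews, different assumptions: roughly 53% versus 51%.
```

Two pollsters holding identical interviews can honestly publish different numbers; the gap comes entirely from the guess about who will show up.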

Smart campaigns embrace the variables that create this uncertainty and use them to direct strategy. Internal polls tell a savvy campaign manager how to organize turnout efforts and which audiences to target with digital ads. Done right, campaigning can change the polls: sometimes by changing voters’ minds, but more often by turning out an electorate more favorable to their candidate.

The idea that polling data can present various paths and possibilities is different from the way news outlets treat poll results. As with much political reporting, the media suffers from a lack of expertise on this front. Too frequently, reporting regresses to a simple “horse race” narrative, with a news or news-entertainment show presenting poll numbers the way a sports show would treat a scoreboard. One side is described as definitively ahead, the other definitively behind.

Instead, political analysts and reporters should think about polls more like a weather forecast. Television meteorologists will tell you what they expect the weather to be over the next week based on current conditions and how similar conditions have behaved in the past. But they also remind you that the outcome could differ from the prediction, either because conditions change or because of incorrect assumptions about how those conditions would behave.

Without understanding the underlying assumptions behind polls (and the way those assumptions could change), media organizations present an inadvertently incomplete picture of what’s actually happening in the electorate. It isn’t enough to know what the topline numbers say; much more important is understanding why the numbers say what they do.

CBS and other news organizations should have learned from the 1952 UNIVAC stunt how assumptions can shade reporting. The aftermath of 2020 suggests they still haven’t figured that lesson out, so it appears future elections will leave in their wake a cluster of articles, podcasts, and blog posts breathlessly asking, “What’s wrong with polling?”

At least, that’s the way it looks based on past behavior. But then again, the model could always be wrong.


Jim Eltringham

Advocacy, message, and grassroots mobilization consultant