Faculty Viewpoints

Don’t Expect Pollsters to Break Their Losing Streak

Polls predicted a “red wave,” but Democrats held the Senate and fought to a near-draw in the House. Yale SOM’s Jeffrey Sonnenfeld and Steven Tian write that after a series of polling misses, it’s time to acknowledge the fundamental flaws in pollsters’ approach.

Voters at a polling station at the Brooklyn Museum on November 8.

Yuki Iwamura/AFP via Getty Images
  • Jeffrey A. Sonnenfeld
    Senior Associate Dean for Leadership Studies & Lester Crown Professor in the Practice of Management
  • Steven Tian
    Director of Research, Chief Executive Leadership Institute

This commentary originally appeared in Fortune.

America’s pollsters are in denial. With Democratic control of the Senate confirmed after the AP called Nevada for the Democrats this weekend, the much-vaunted “red wave” predicted by the pollsters plainly failed to materialize, yet the pollsters are rushing to spin fact-free revisionist narratives asserting otherwise.

Quantitative statistics and data can often present ambiguous situations with a veneer of objective, unimpeachable fact–which makes it even more disappointing when statistical integrity is twisted or misunderstood.

For the past nine months, we have worked assiduously to correct the false numerical narratives of Putin’s propaganda on everything ranging from dubious Russian national income statistics to the number of companies that have actually pulled out of Russia to the supposed resilience of the Russian economy.

Unfortunately, closer to home, many media commentators treat the election forecasts put out by the domestic political polling industry as the product of highly sophisticated data analysis, offering breathless horse-race coverage based on who is up and who is down in the most recent poll. In reality, pollsters’ practices often veer toward unsupported assumptions and sophistry.

Great expert resources such as the National Opinion Research Center, Pew, and Edelman have better methods and larger samples, and they avoid daily headline-driven overnight readings. Some, such as the Harris Poll and Morning Consult, are rather nuanced and accurate. However, media pundits and forecasters lump weaker outlets and partisan pollsters together with these reputable institutions in their analyses.

The GOP-funded Trafalgar Group, as Slate showed, not only badly missed its overall calls but also wrongly pronounced swings toward the GOP among millennials and Hispanics when the opposite happened.

Two years ago, the New York Times warned that “Trafalgar does not disclose its methods, and is considered far too shadowy by other pollsters to be taken seriously.” Nate Silver’s polling aggregation site nevertheless rated the firm an A-.

During an interview last month, the Trafalgar Group’s founder Robert Cahaly said that he “wants to be right more than anything” and to be “the Elon Musk of polling.”

Most pundits and pollsters got it wrong in 2016, 2020, and 2022–not because their artificial intelligence systems failed, but because none of us can learn if we cut off true facts and hide in a haze of denial. For example, Nate Cohn of the New York Times argued: “I’m surprised by the amount of griping about the polling that I’m seeing. The polls did pretty well! The ‘traditional’ polls did *really* well. Doesn’t get much better.”

Perhaps these pollsters should take a closer look at their own polls. Take the Senate side alone:

The misses were even more egregious when it came to the House and governors’ races. As one example of many, the average poll in the Arizona gubernatorial race in the week before Election Day had Kari Lake winning by 2.4%, with not a single major poll calling a Katie Hobbs victory.

Beyond any individual race, polls seriously misread the mood of the country and the salient issues on voters’ minds. Pre-election polls largely found that voters were apathetic to the issue of democracy and receptive to voting for election deniers, with pundits lambasting President Biden’s pre-election speeches on democracy accordingly.

Evidently the pollsters were wrong. Many of the most vocal election deniers were soundly defeated–ranging from Mark Finchem in Arizona to Jim Marchant in Nevada to Tim Michels in Wisconsin to Kristina Karamo and Tudor Dixon in Michigan to Doug Mastriano in Pennsylvania–even though the first four were generally leading in pre-election polls.

Of course, election surprises come with the territory–and nobody really knows what is going to happen until all the votes are tallied up. But increasingly egregious polling misses year after year call for increased scrutiny into the shortcomings of modern polling sources and methods, as well as a better and more realistic understanding of what pollsters can and can’t know.

Assuming turnout is pseudoscience

Pollsters can only extrapolate the turnout rates of previous years. The last couple of election cycles have seen record turnout across both sides of the aisle, especially with younger voters, lessening the value of already-displaced historical precedents.
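To make the sensitivity concrete, here is a minimal sketch in Python with entirely hypothetical numbers (not any pollster’s actual model): the same raw survey responses project to different results depending on which turnout rates are assumed for each age group.

```python
# Hypothetical registered-voter shares and Democratic support by age group.
groups = {
    "18-29": {"share": 0.20, "dem_support": 0.62},
    "30-64": {"share": 0.55, "dem_support": 0.48},
    "65+":   {"share": 0.25, "dem_support": 0.44},
}

def projected_dem_share(turnout_by_group):
    """Weight each group's candidate support by its assumed turnout rate."""
    votes = sum(g["share"] * turnout_by_group[name] for name, g in groups.items())
    dem = sum(g["share"] * turnout_by_group[name] * g["dem_support"]
              for name, g in groups.items())
    return dem / votes

low_youth = {"18-29": 0.30, "30-64": 0.55, "65+": 0.70}   # historical-style assumption
high_youth = {"18-29": 0.50, "30-64": 0.55, "65+": 0.70}  # record-youth-turnout assumption

print(f"Assuming low youth turnout:  D {projected_dem_share(low_youth):.1%}")
print(f"Assuming high youth turnout: D {projected_dem_share(high_youth):.1%}")
```

The swing of roughly a point in projected vote share is enough to flip the call in a close race, and the turnout assumption driving it is, in the end, a guess.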

Hard prior data is elusive

What you hear depends on whom you ask. Some polls sample only likely voters, while others survey all registered voters or even all citizens–which can produce vastly different results. Pollsters have only rough exit poll data to work with across demographic breakdowns–not official data by age, gender, religion, race/ethnicity, marital status, household size, income, employment, education, party, and ideology. With so little clarity to begin with, pollsters have to make unilateral assumptions around these crucial demographic weightings far more than they’d like to admit.
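Here is a minimal sketch, with hypothetical respondents and a single weighting variable rather than any pollster’s actual method, of how much one such assumption can move a published topline.

```python
# Toy sample: 1,000 interviews broken out by education and candidate preference.
respondents = (
    [("college", "D")] * 330 + [("college", "R")] * 270 +
    [("non_college", "D")] * 170 + [("non_college", "R")] * 230
)

def weighted_topline(sample, assumed_share_college):
    """Post-stratify on one variable: weight respondents so the sample matches
    an assumed share of college-educated voters in the electorate."""
    n = len(sample)
    share_college_in_sample = sum(1 for edu, _ in sample if edu == "college") / n
    weights = {
        "college": assumed_share_college / share_college_in_sample,
        "non_college": (1 - assumed_share_college) / (1 - share_college_in_sample),
    }
    dem = sum(weights[edu] for edu, cand in sample if cand == "D")
    total = sum(weights[edu] for edu, _ in sample)
    return dem / total

# Same interviews, two different guesses about who will actually show up to vote.
print(f"Assume a 45% college-educated electorate: D {weighted_topline(respondents, 0.45):.1%}")
print(f"Assume a 35% college-educated electorate: D {weighted_topline(respondents, 0.35):.1%}")
```

Real pollsters weight on many variables at once, which only multiplies the judgment calls baked into a single published number.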

Voter response bias

The sheer number of pollsters–which has exploded over the last 20 years–creates fatigue and tedium among voters, who are also less willing to respond for privacy and social-desirability reasons.

Pollsters are highly aware that some types of voters are more likely to respond than others–having learned from the 1936 Alf Landon mis-call and the mistakes of the Dewey-Truman era–and thus use a propensity score to adjust for respondents’ propensity to be online. This too calls for unilateral assumptions without any grounding in actual voting data. Even the smallest tweaks to these base assumptions and filtering algorithms would significantly alter the tenor of the polling results.
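A minimal sketch of the general idea, with made-up propensities rather than any firm’s actual model: each respondent is weighted by the inverse of their estimated probability of responding, so over-represented groups count for less.

```python
# Hypothetical respondents: (candidate preference, estimated response propensity).
respondents = [
    ("D", 0.8), ("D", 0.8), ("D", 0.8), ("D", 0.6),
    ("R", 0.3), ("R", 0.3), ("R", 0.6),
]

def propensity_weighted_share(sample, candidate):
    """Inverse-propensity weighting: each respondent's weight is 1 / P(respond)."""
    weights = [1.0 / p for _, p in sample]
    favored = sum(w for (c, _), w in zip(sample, weights) if c == candidate)
    return favored / sum(weights)

raw = sum(1 for c, _ in respondents if c == "D") / len(respondents)
print(f"Raw Democratic share:      {raw:.1%}")                                      # ~57%
print(f"Weighted Democratic share: {propensity_weighted_share(respondents, 'D'):.1%}")  # ~39%
```

The entire swing comes from the assumed propensities, which is exactly the point: small changes to those assumptions move the headline number dramatically.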

Sampling methods

Pew has documented that telephone response rates have fallen below 9%, nowhere near the threshold for valid measurement in any social science field. Online surveying can be even more problematic, as there is no national list of email addresses from which people could be sampled. Thus there is no systematic way to collect a traditional probability sample of the general population relying on the internet.

Sample size

With the exception of Edelman, response sample sizes are often far too small, with most polls surveying fewer than 1,000 people–sometimes only a few hundred. Making things worse is narrow over-specification: asking for more than the data can give. A sub-category with seven respondents yields nothing but noise.
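The arithmetic is unforgiving. A quick sketch using the textbook 95% margin of error for a simple random sample at an even split, 1.96 * sqrt(p(1-p)/n), shows how fast precision collapses as samples shrink:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample at proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 400, 100, 7):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
# n = 1000: +/- 3.1%   n = 400: +/- 4.9%   n = 100: +/- 9.8%   n = 7: +/- 37.0%
```

And that is the best case; weighting and non-response add a design effect that makes real-world error larger still.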

Wording bias

Poorly phrased questions can create discrepancies between what pollsters sought to measure and how audiences interpret the question, a phenomenon social science researchers call “demand characteristics.” This is worsened by the fact that many pollsters provide only two possible answers to a question in lieu of a more representative and comprehensive Likert scale, eliminating the central tendency and artificially pushing a spectrum of responses toward dichotomous poles.
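A minimal illustration with hypothetical numbers: the same 100 respondents look very different on a five-point scale than when they are forced to pick a side.

```python
# 1 = strongly oppose, 3 = neutral, 5 = strongly support (100 hypothetical respondents).
responses = [1] * 10 + [2] * 15 + [3] * 40 + [4] * 20 + [5] * 15

support = sum(1 for r in responses if r >= 4)
oppose = sum(1 for r in responses if r <= 2)
neutral = len(responses) - support - oppose

print(f"Full scale: {support}% support, {oppose}% oppose, {neutral}% neutral")
# A dichotomous question forces the 40 neutral respondents to pick a side,
# so the published topline can land anywhere in a 40-point range:
print(f"Forced choice, neutrals break in favor:  {support + neutral}% support")
print(f"Forced choice, neutrals break against:   {support}% support")
```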

Drama seeking

The motives of the pollsters–and their sponsors–can be questionable, with tradeoffs between attention and accuracy. Not only are many polls commissioned by partisan groups with obvious biases, but some polling outfits also use provocative polling results to gain the stature and expert academic authority that they otherwise lack. High-profile polls help lower-profile institutions compete commercially in the attention economy.

It is these methodological shortcomings and constraints that should draw greater attention, rather than the media’s breathless horse-race coverage of who is up and who is down in the most recent polls.

As Jim Fallows notes, “if any professionals were as off base, as consistently, as political ‘experts’ are, we’d look for someone else to do those jobs.”

Predictive political polling is helpful–as long as we bear in mind its constraints and limitations. There are enough known unknowns inherent to political polling methods, not to mention the unknown unknowns, to make it more of an art than a science.

Without contrition, Nate Silver, one of the prominent pollster pundits who got it wrong, was back at it this weekend, offering predictions on the Georgia U.S. Senate sweepstakes. The only lesson for pollsters seems to be: If you can’t predict accurately, predict often.
