I quite enjoyed reading Rick Perlstein’s critique of the polling and political prognostication profession, but I’m a little resistant to adopting his “dime store Buddhism” approach to predicting the winners of our elections. While I can agree that many of the most famous pollsters throughout history can properly be called “assholes” for their refusal to own their mistakes and their willingness to blame others for those failures, I still see value in what they do. And I do have one complaint about Perlstein’s otherwise comprehensive treatment of that history. He doesn’t even discuss the challenge presented to pollsters by the Electoral College.

It’s difficult enough to get a good polling sample and then to weight it correctly, but sussing out that the loser of the popular vote is actually going to win the election is another ball of wax. There’s also the example of John Kerry, who would have joined George W. Bush and Donald Trump as victorious popular vote losers if only he had overcome a massive voter suppression effort by Ohio’s Secretary of State Kenneth Blackwell, who deliberately created massive lines in urban centers and college towns. Are pollsters supposed to weight for that?

In 2024, I think people will be genuinely shocked if Trump wins the popular vote, but his chances of winning the Electoral College look like even money, and calling that correctly is probably more luck than science given how much hinges on multiple likely 50-50 outcomes. And that gets me to the value of the poll aggregator method. It’s true that deciding which polls to exclude or adjust for obvious political bias is difficult, and it’s also true that pollsters often display a herd mentality that leads them to calibrate their results toward the consensus for fear of being an outlier. Aggregating polls doesn’t involve magical thinking, but believing in the method’s infallibility does. Still, one thing aggregating does accomplish well is showing movement.

This is for two reasons. First, by doing some work to minimize the effectiveness of gaming the system with dishonest polls, the method properly limits the noise. The honest pollsters are consistent with how they weight their results, so that even if their weighting is badly off, differences between their August survey and their September survey have validity. And when we see that movement confirmed by multiple outfits, we can have some confidence in how an election is developing and how certain events are affecting the electorate.
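To make that concrete, here’s a minimal sketch with made-up numbers. The pollster names and figures are purely hypothetical; the point is that two outfits with different weighting assumptions can disagree on the level of support while still agreeing on the movement between survey waves, which is why aggregators can trust deltas more than absolute numbers.

```python
# Hypothetical figures for illustration only: two pollsters with
# different "house effects" (systematic lean) survey the same race twice.
august = {"Pollster A": 48.0, "Pollster B": 44.0}     # candidate's share, August
september = {"Pollster A": 50.0, "Pollster B": 46.0}  # same pollsters, September

# The levels disagree by 4 points, but because each outfit weights
# consistently, its month-over-month movement is still meaningful.
for name in august:
    delta = september[name] - august[name]
    print(f"{name}: {august[name]} -> {september[name]} (moved {delta:+.1f})")

# Averaging the deltas rather than the levels recovers the shift
# even though the absolute numbers never agreed.
avg_movement = sum(september[n] - august[n] for n in august) / len(august)
print(f"Average movement: {avg_movement:+.1f}")
```

Both pollsters report a two-point shift, so the aggregate movement is well supported even though neither level can be taken at face value.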

Of course, the fear of being an outlier can mute the signal of movement. Who wants to be the first to say that the electorate has strongly tilted? It’s probably built in that aggregations of polls will underestimate strong shifts and present them as modest. In other words, honest polling outfits are consistent until they find themselves too far off on an island.

Internal polling is subject to similar risks. I believe that Mitt Romney’s pollsters consistently weighted their surveys with optimistic assumptions that just weren’t justified. They wound up genuinely convincing Romney that everyone else was wrong and they were right, and he was astonished when he lost. But on the whole, I think internal polls are probably more accurate precisely because their primary purpose isn’t to predict but to direct time and resources and to evaluate the effectiveness of the campaign. Did our most recent trip to North Carolina move the needle? Is our big ad campaign in the Philly suburbs working as planned?

Of course, we in the public only see internal polling when a candidate wants us to see it, and sometimes a dishonest internal poll is produced for the express purpose of impressing donors or garnering good press coverage. That’s different from what campaigns use to gauge the mood of the electorate and explains why aggregators are suspicious of internal polls.

One thing I’ll say in defense of Nate Silver is that he’s correct when he says people don’t really understand odds. If we made a coin that had a 75 percent chance of landing on tails, how much money would you bet on it coming up heads? Or tails? Would we really fault someone who told us the odds for being wrong about the outcome? Would that even make sense?
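The coin analogy is easy to check with a quick simulation (the probability and sample size here are just the ones from the thought experiment above). A forecaster who says tails has a 75 percent chance will see heads come up about a quarter of the time, and none of those heads outcomes makes the forecast wrong:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

P_TAILS = 0.75   # the forecaster's stated probability of tails
N = 100_000      # number of simulated flips

# A draw at or above P_TAILS counts as the "unlikely" heads outcome.
heads = sum(1 for _ in range(N) if random.random() >= P_TAILS)
print(f"Heads came up {heads / N:.1%} of the time")
# Heads still shows up in roughly a quarter of flips, exactly as the
# 75 percent forecast implies -- a single heads result doesn't
# falsify the stated odds.
```

The trouble with an election, of course, is that it’s a single flip: the forecast can only be judged over many events, and we get just one.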

There are some things we can take from polls with some confidence. I think it’s a safe bet that Kamala Harris is doing better against Trump than Biden was. I think the election is tick-tight in the so-called swing states. But a claim that Harris has, say, a 65 percent chance of winning or losing is kind of a meaningless prediction given that there will be only one election, just as predicting heads or tails is meaningless on any given coin flip. And, finally, I think when we see movement in the aggregators, we can believe that there is movement.

There are some other things political surveys can detect. It could be that Harris is doing poorly compared to Biden in places where the Democrats did poorly in the 2022 midterms, and this will bring down her popular vote advantage without having any meaning for the Electoral College. For example, the Democratic Party in New York couldn’t be more of a mess, especially with the mayor of New York being indicted. Trump could rack up bigger margins in the Deep South. Some of this will be useful for predicting the outcome of House races even if it tells us nothing about who will be our next president.

In any case, hating on pollsters and polling aggregators is fine, but I wouldn’t support flying blind without their input. What matters is knowing how to interpret their output.
