Exclusive to Political Wire, Nate Silver defends his critique of Sam Wang’s election forecast. It’s definitely worth reading.
by Nate Silver
Editor in Chief, FiveThirtyEight
The most striking feature about the Senate forecasts right now is how much of a consensus there is between them. As of Thursday morning:
- FiveThirtyEight’s forecast had Republicans with a 59 percent chance of winning.
- Huffington Post put them at 58 percent.
- NYT was at 63 percent.
- Daily Kos was at 66 percent.
- Charlie Cook was at 60 percent.
- Predictwise was at 64 percent.
- Betting markets put them at about 67 percent.
- And Washington Post had them at 78 percent.
The exception is Sam Wang’s model, which is alone in having Democrats favored (on Thursday AM it had Republicans with a 38 percent chance of winning the majority). So this is really about “Sam Wang vs. the World” and not “Sam Wang vs. FiveThirtyEight”.
Why is Wang’s model so different from the others?
Wang has often claimed that the difference arises because his model uses polls only — whereas other models use polls along with other factors (“fundamentals”) to make their forecasts. But this explanation doesn’t hold up. Among the other models I mentioned, Huffington Post’s and Daily Kos’s also use polls only. They show very different results from Wang’s. So do the polls-only Real Clear Politics averages.
So here’s my attempt at an explanation. The short version: Wang’s forecast has Democrats ahead because it goes all the way back to June in looking at polls. The long-ish version is below.
Wang’s site offers both a “snapshot” — what he says would happen in an election held today — and a “forecast” — how he assesses the Election Day odds. The snapshot has fluctuated wildly: over the past two weeks, it’s shown everything from a 93 percent chance of a Democratic win to a 76 percent chance of a Republican one. The swings have been even wilder in individual states. His snapshot currently shows the Republican Dan Sullivan with a 94 percent chance of winning the Senate seat in Alaska. Only a week ago, it had Sullivan with just a 1 percent chance.
I’ve critiqued Wang’s snapshot probabilities in detail in the past. They’ve shown wildly overconfident results — for instance, giving Sharron Angle a 99.997 percent chance of winning the Senate seat in Nevada in 2010. This is because of a central methodological flaw: Wang estimates the uncertainty in his snapshots based on how much the polls differ from one another — and not how much they’ve differed from actual election results.
In reality, polls often err in the same direction. So even if you have a number of polls that show similar results (as they did in Nevada in 2010), there is still the potential for a big miss. This is perhaps the single most important parameter of an election model, in fact: how much do polls differ from actual results? FWIW, in elections since 1998, the standard error of Election Day polling averages has been about plus or minus 5 percentage points. Polling averages are powerful instruments, but they’re a long way from perfect.
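To see how much this one parameter matters, here’s a minimal sketch under a simple normal model. The specific lead and standard errors are illustrative assumptions, not anyone’s actual model: it computes a win probability from the same 2-point polling lead twice — once using a tiny error derived from inter-poll agreement, and once using the roughly 5-point historical standard error of Election Day polling averages.

```python
from math import erf, sqrt

def win_probability(lead_pts, stderr_pts):
    """P(true margin > 0) under a normal model centered
    on the observed polling lead."""
    z = lead_pts / stderr_pts
    return 0.5 * (1 + erf(z / sqrt(2)))

lead = 2.0  # hypothetical 2-point polling lead

# If the polls cluster tightly and you treat that spread as the
# full uncertainty (say, a 0.5-point standard error), a 2-point
# lead looks like near-certainty:
print(win_probability(lead, 0.5))  # about 99.997 percent

# Using the historical ~5-point standard error of Election Day
# polling averages, the same lead is only a modest edge:
print(win_probability(lead, 5.0))  # roughly two-in-three
```

Same polls, same lead — the only thing that changed is the error assumption, and the answer moved from a virtual lock to a coin flip with a thumb on the scale.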
What about Wang’s “forecast” instead of his “snapshot”? Might his forecast be OK even if his snapshot has problems?
Here’s the problem: Wang’s forecast is based on his snapshots. And it uses them in a strange way. In particular, his forecasts are based on an average of his past snapshots since June. Since Wang’s is a “polls only” model, this is equivalent to looking at polls back to June.
Why go all the way back to June? Wang seems to have chosen the date arbitrarily. So how far back should you go? The way to answer this is by looking at the historical data. And the empirical answer is that, while you could debate about whether to go two weeks back or four weeks back or whatever else, you certainly ought not be looking at June polls when trying to forecast a race in October.
If you’d applied Wang’s technique in the past, you might still have had Republicans favored to win the Senate at this point in 2012 — even though the race had clearly broken toward Democrats by that time. Or to use a presidential example, you might still have had Michael Dukakis in a highly competitive race with Bush, even though Bush was well ahead by October 1988.
In football terms, it’s like asserting the Philadelphia Eagles are still favored even after the Dallas Cowboys score a touchdown to go ahead 21-17 because the Eagles had been ahead on average earlier in the game.
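The lag is easy to demonstrate with a toy example. The numbers below are entirely synthetic — a race that sits roughly even all summer and then breaks toward Republicans in October — but they show why an average over every snapshot since June stays anchored to stale data long after the race has moved:

```python
# Synthetic daily polling margins (Dem minus Rep), purely illustrative.
summer = [1.0] * 120    # ~June through September: Dems +1
october = [-3.0] * 30   # October: race breaks, GOP +3

series = summer + october

# Averaging every data point back to "June":
long_window = sum(series) / len(series)

# Averaging only the most recent two weeks:
recent_window = sum(series[-14:]) / 14

print(long_window)    # still positive: Dems nominally "ahead"
print(recent_window)  # clearly negative: GOP ahead
```

The long average still shows the trailing side in front because four months of stale data outvote the last month of actual movement — exactly the Eagles-still-favored-after-the-touchdown problem.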
Importantly also, a lot of the polling before Labor Day was conducted among registered voters — and those polls tend to exaggerate how well Democrats will do. If you’re only using recent polls, this assumption doesn’t matter so much anymore since almost all polls report likely voter results now. But if you’re going all the way back to June, it’s a problem.
Furthermore, Wang’s model assumes the independent Greg Orman is 100 percent sure to caucus with Democrats if he wins in Kansas. That’s a strange assumption for a model that claims to make simple, neutral assumptions. The simplest assumption would be to take Orman’s word at face value, which is that he doesn’t know who he’ll caucus with in the event his choice would determine the majority (i.e. to treat his choice as 50/50).
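To gauge how much that one assumption can matter, here’s a back-of-the-envelope sketch. All the probabilities below are made-up placeholders, not output from Wang’s model or mine; the point is only the mechanics of how the caucus assumption feeds into the majority odds:

```python
# Made-up illustrative inputs, not real model output.
p_pivotal = 0.25   # P(Orman wins AND his choice decides Senate control)
p_dem_base = 0.35  # P(Dem majority) across all other scenarios

def p_dem_majority(p_caucus_dem):
    """Overall P(Democratic majority) given the probability that
    Orman caucuses with Democrats when his choice is decisive."""
    return p_dem_base + p_pivotal * p_caucus_dem

# Treating Orman as 100 percent certain to caucus with Democrats:
print(p_dem_majority(1.0))  # Democrats favored

# Taking him at his word and treating it as a coin flip:
print(p_dem_majority(0.5))  # Republicans favored
```

With these (hypothetical) inputs, the caucus assumption alone is enough to flip which party is favored — which is the sense in which a minor-seeming modeling choice can drive the headline number.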
All of these things tie together: minor-seeming assumptions matter more in Wang’s model because of its tendency to underestimate the uncertainty in the outlook. Perhaps on some day a few weeks ago or a few months ago, Democrats were slightly favored in a “polls only” model if you were making certain assumptions, like using registered voter polls and treating Orman as certain to caucus with them. The race has always been pretty close. The problem is that Wang’s snapshots will take a tiny advantage and turn it into a 90 percent chance of winning the election. And those flawed snapshots from months ago are still getting averaged into his forecasts today.