The Crisis in Issue Polling
Nate Cohn: “If you do this exercise for previous elections, issue polling failures look more like the norm than the exception. There just aren’t many elections when you can read a pre-election poll story, line it up with the post-election story, and say that the pre-election poll captured the most important dynamics of the election…”
“With such a poor track record, there’s a case that ‘issue’ polling faces a far graver crisis than ‘horse race’ polling. I can imagine many public pollsters recoiling at that assertion, but they can’t prove it wrong, either. The crisis facing issue polling is almost entirely non-falsifiable — just like the issue polling itself. Indeed, the fact that the problems with issue polling are so hard to quantify is probably why problems have been allowed to fester. Most pollsters probably assume they’re good at issue polling; after all, unlike with horse race polls, they’re almost never demonstrably wrong.”
Traditional Indicators May Be Misleading
The Polls Weren’t That Bad
This Poll Is Obviously an Outlier
Old Metrics No Longer Work in Predicting Elections
Amy Walter: “When I first started covering politics some 25 years ago, the following data points were considered the gold standard to understanding the trajectory of an election cycle: a president’s job approval rating, the overall mood of the electorate (namely, is the country headed in the right direction or wrong direction) and opinions about the economy. Taken together, they told the story of a president or party in peril, or a president and party that were headed to reelection.”
“Over these past few years, however, these questions have become less determinative.”
FiveThirtyEight Fighting
Nate Silver, the former head of FiveThirtyEight, is feuding with the data journalism site's new head, G. Elliott Morris.
Pollsters Worry the Trump Problem Is Back
Politico: “The polling industry whiffed every year Trump has been on the ballot. In 2016, Trump upset Hillary Clinton to win the presidency. And after spending four years trying to fix what went wrong, the polls were even worse in 2020. Trump ran far more competitively with now-President Joe Biden than the preelection surveys suggested.”
“Pollsters are breathing a sigh of relief after largely nailing last year’s midterm elections. But presidential years have been a different story in the Trump era.”
“And now, with Trump expanding his lead over his GOP primary rivals, pollsters are fretting about a bloc of the electorate that has made his support nearly impossible to measure accurately.”
When a Poll Goes Viral, It’s Worth Double Checking
The Most Accurate Polls in 2022
FiveThirtyEight is out with updated pollster ratings after the 2022 midterms and finds Suffolk University and Siena College/New York Times Upshot were the most accurate pollsters, while several GOP-affiliated firms were the least accurate.
FiveThirtyEight Bans Pollster Who Bet on Elections
FiveThirtyEight has banned polls from former Data for Progress executive director Sean McElwee after an investigation revealed that he had bet on elections.
How the Midterm Forecasts Performed
Nate Silver: “Let’s get this out of the way up front: There was a wide gap between the perception of how well polls and data-driven forecasts did in 2022 and the reality of how they did … and the reality is that they did pretty well.”
“While some polling firms badly missed the mark, in the aggregate the polls had one of their most accurate cycles in recent history. As a result, FiveThirtyEight’s forecasts had a pretty good year, too. Media proclamations of a ‘red wave’ occurred largely despite polls that showed a close race for the U.S. Senate and a close generic congressional ballot. It was the pundits who made the red wave narrative, not the data.”
What Voters Thought in the 2022 Midterms
The 2022 Collaborative Midterm Survey from Cornell University — a poll of nearly 20,000 U.S. adults with large oversamples in California, Florida, and Wisconsin — was just published.
An interactive feature allows you to really dig into the results. Highly recommended.
How Skewed Polls Fed a False Election Narrative
New York Times: “Traditional nonpartisan pollsters, after years of trial and error and tweaking of their methodologies, produced polls that largely reflected reality. But they also conducted fewer polls than in the past.”
“That paucity allowed their accurate findings to be overwhelmed by an onrush of partisan polls in key states that more readily suited the needs of the sprawling and voracious political content machine — one sustained by ratings and clicks, and famished for fresh data and compelling narratives.”
“The skewed red-wave surveys polluted polling averages, which are relied upon by campaigns, donors, voters and the news media. It fed the home-team boosterism of an expanding array of right-wing media outlets — from Steve Bannon’s ‘War Room’ podcast and ‘The Charlie Kirk Show’ to Fox News and its top-rated prime-time lineup. And it spilled over into coverage by mainstream news organizations, including The Times, that amplified the alarms being sounded about potential Democratic doom.”
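The mechanism is simple enough to sketch in a few lines of code. The numbers below are invented purely for illustration; they are not the 2022 figures, and real aggregators weight and adjust polls rather than taking a raw mean.

```python
# Toy illustration (invented numbers, not 2022 data): how a flood of
# skewed partisan polls can drag a simple unweighted polling average.

def polling_average(margins):
    """Average candidate margin (Democrat minus Republican, in points)."""
    return sum(margins) / len(margins)

# Hypothetical nonpartisan polls in one race, averaging D +2.
nonpartisan = [2.0, 3.0, 1.0, 2.5, 1.5]

# Hypothetical partisan polls averaging R +3.
partisan = [-3.0, -2.5, -4.0, -3.5, -2.0, -3.0]

print(f"Nonpartisan polls only: {polling_average(nonpartisan):+.1f}")
print(f"All polls, unweighted:  {polling_average(nonpartisan + partisan):+.1f}")
# Output: +2.0 versus roughly -0.7. The unweighted average flips from a
# small Democratic lead to a small Republican one even though the
# nonpartisan polls never moved.
```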
Sam Bankman-Fried Takes Out a Pollster
Politico has a deep dive into the fallout at progressive pollster Data for Progress after the departure of Sean McElwee amid allegations of misconduct.
The (Nonpartisan) Polls Were Pretty Accurate
The Bulwark: “The nonpartisan polling was actually pretty good in 2022. Most of the phantom Republican strength in pre-election statewide polling was a function of junk firms with poor data quality and low transparency spamming the polling averages with bad polls.”
“In reality, an aggregation of nonpartisan polls predicted the correct winner in every Senate battleground and would have predicted the margin substantially more accurately than the partisan GOP pollsters which flooded the averages in almost every major race.”
How the Midterm Polls Performed
Wall Street Journal: “Across the eight most competitive races, Democrats on average did about three points better than the final poll averages calculated by Real Clear Politics. And a number of those averages camouflage a wide disparity among individual polls.”
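The arithmetic behind a figure like that is a straightforward signed-error average, sketched below with hypothetical placeholder margins rather than the Journal's or RealClearPolitics' actual data.

```python
# Sketch of the signed-error calculation behind "Democrats did about
# three points better than the final poll averages." All margins are
# hypothetical placeholders (Democrat minus Republican, in points).

races = {
    "Race A": {"poll_avg": -1.0, "result": 2.0},
    "Race B": {"poll_avg": 0.5, "result": 3.0},
    "Race C": {"poll_avg": -2.0, "result": 1.5},
}

errors = [r["result"] - r["poll_avg"] for r in races.values()]
average_miss = sum(errors) / len(errors)

print(f"Average Democratic overperformance vs. final poll averages: "
      f"{average_miss:+.1f} points")
# With these placeholder numbers the answer is +3.0, i.e. Democrats
# running three points ahead of the final polling averages.
```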
Are the Polls Still Missing ‘Hidden’ Republicans?
Nate Cohn ran an experiment in which he paid Wisconsin voters to complete a poll, comparing the high-incentive survey with a parallel Times/Siena poll.
“The data is still preliminary, and it will probably take at least six months, if not longer, before we can reach any final conclusions. But there is one immediate difference between the two groups, and that is in the polls’ response rates: Nearly 30 percent of households have responded to the survey so far — a figure dwarfing the 1.6 percent completion rate in the parallel Times/Siena poll.”
“That said, an initial glance at the topline findings may be sobering for anyone who hoped that $25 and higher response rates would break through to reach the coveted ‘hidden’ Trump vote. While there were important differences between the high- and low-incentive surveys — including some that hold promise for improving Times/Siena surveys and others going forward — there was not necessarily obvious evidence of a breakthrough to a vastly different pool of respondents.”