Nate Silver: “But be wary of analysts who try to take this complexity argument too far. It’s true that any snapshot of public opinion from a poll is imperfect. It is not true that if you scratch only under poll results deeply enough, you will discover the American people actually take a clear side on abortion. By any objective measure, the country is conflicted.”
The Hill: “Gallup has partnered with the University of Michigan for a top-to-bottom review of its operations. In a written post-mortem, the pollster said it’s not averse to making ‘major revisions or even a replacement model’ if needed to produce more accurate data.”
Joshua Green points out that many have claimed the Obama re-election campaign’s models proved more accurate than traditional tracking polls in the run-up to the election.
“A number of Obama vets repeated this claim to me, so I asked them to provide some evidence to back it up, and they did. Here, for the first time, is a chart based on internal data that shows how the Obama campaign’s swing state model performed against the much maligned Gallup poll over the last several months of the race.”
The National Republican Congressional Committee “is moving to reboot its polling operation after a messy 2012 cycle, the first concrete remedy taken by the Republican side since candidates and outside groups were left stunned on Election Day by results that their internal data never came close to predicting,” Politico reports.
The NRCC “is the first GOP entity to take specific steps to try to rectify the party’s widely acknowledged polling debacle. Republican strategists confirmed after the end of the 2012 race that a huge slice of their survey data was based on flawed assumptions, and failed to anticipate the diversity and scale of turnout on the Democratic side.”
Mark Blumenthal takes an extensive look at why Gallup’s polling was so wrong during the 2012 presidential election.
Nate Silver told Student Life that he might stop doing his election forecasts after the 2014 or 2016 elections if his projections begin to actually influence election outcomes.
Said Silver: “The polls can certainly affect elections at times. I hope people don’t take the forecasts too seriously. You’d rather have an experiment where you record it off from the actual voters, in a sense, but we’ll see. If it gets really weird in 2014, in 2016, then maybe I’ll stop doing it. I don’t want to influence the democratic process in a negative way.”
A new study “makes a startling suggestion about so-called ‘robo-polls’ in the 2012 Republican presidential primaries, raising the question of whether these automated surveys may have been adjusted to match live-interviewer polls,” Gary Langer reports.
Key excerpt: “There is no difference in the accuracy of IVR polls and human polls when IVR polls occur after a human poll, but IVR polls do significantly worse if human polls are not conducted first. The apparent equivalence of IVR polls and human polls in the 2012 Republican primary appears to depend on human polls being conducted prior to the IVR polls.”
That said, the authors admit they “did not test the reverse possibility, that traditional polls were altered to match automated polls.”
Gallup will no longer be conducting polls for USA Today, the Washington Post reports.
“It’s not clear at this point which side broke off the arrangement or what the reason was” but USA Today says it “is in the final stages of negotiating an arrangement with another polling organization.”
Harry Enten says there are five polling lessons we should take from the 2012 election season:
1. When likely and registered voter polls disagree in high turnout elections, you should usually go with the registered voter surveys.
2. Cellphones are generally needed for an accurate telephone poll.
3. Internet polling is the wave of the future.
4. Internal polls published publicly generally should not be trusted.
5. When state and national polls disagree, you should generally go with the state data.
Harper Polling launches this week with the goal of putting the Republican party “on parity with Democrats in the field of IVR polling – a term that stands for interactive voice response polling, commonly known as ‘robo-polling,'” Politico reports.
“For several cycles now, Democrats have benefited from a high-volume, relatively inexpensive flow of survey data from the company Public Policy Polling, which takes hundreds of polls in any given cycle checking up on individual races and national issue debates. Some of those surveys are released to the public, while others are conducted for private purposes by Democratic campaigns and interest groups.”
Mark Blumenthal: “What made the exit polls especially challenging this year is that Edison Research, the company that conducts the exit polls on behalf of the National Election Pool (NEP) consortium of the five television networks and the Associated Press, is in Somerville, N.J. It was directly in the path of Hurricane Sandy, and nearly knocked out of business by the storm at a critical moment in its preparations.”
“The biennial exit polls are an extraordinary undertaking under normal circumstances … Altogether, Edison reports that more than 3,000 interviewers collected nearly 120,000 interviews of Americans who voted in 2012… This year, Hurricane Sandy helped make that final week far more challenging than usual.”
Nate Silver finds that some of the most accurate polling firms this election cycle were those that conducted their polls online.
“The final poll conducted by Google Consumer Surveys had Mr. Obama ahead in the national popular vote by 2.3 percentage points – very close to his actual margin, which was 2.6 percentage points based on ballots counted through Saturday morning. Ipsos, which conducted online polls for Reuters, came close to the actual results in most places that it surveyed, as did the Canadian online polling firm Angus Reid. Another online polling firm, YouGov, got reasonably good results.”
“Perhaps it won’t be long before Google, not Gallup, is the most trusted name in polling.”
The Votemaster: “Enough presidential polling data is now available to analyze Rasmussen’s data… Averaging all 82 polls, Rasmussen’s mean bias is -1.91 points, that is, Rasmussen appears to be making Obama look almost 2 points worse than the other pollsters.”
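The Votemaster’s −1.91-point figure is just an average of per-poll differences between Rasmussen and other pollsters. A minimal sketch of that arithmetic, with invented margins rather than the actual 82 polls (a real analysis would pair each Rasmussen poll with contemporaneous polls of the same race):

```python
# Sketch of a "house effect" / mean-bias calculation in the spirit of
# the Votemaster's Rasmussen analysis. All numbers are hypothetical.

# Each entry pairs one Rasmussen margin with the average margin from
# other pollsters for the same race (Obama minus Romney, in points).
paired_polls = [
    (-1.0, 1.5),
    (0.0, 2.0),
    (2.0, 3.5),
    (-3.0, -1.5),
]

# Per-poll bias: how much worse (negative) or better (positive)
# Rasmussen makes Obama look relative to the consensus.
biases = [rasmussen - others for rasmussen, others in paired_polls]

mean_bias = sum(biases) / len(biases)
print(f"mean bias: {mean_bias:+.2f} points")
```

With these made-up numbers the mean bias comes out negative, i.e. the hypothetical pollster systematically shows Obama doing worse than the consensus, which is the pattern the Votemaster reports for Rasmussen.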
John Sides looks at new research which finds “the errors of the robo-polls were much lower when a live-interviewer poll had already been conducted in a particular state. In other words, the robo-polls were more accurate when there was a previous live-interviewer poll that may have served as a benchmark.”
From the paper: “Pollsters know their results are being compared to the results of prior polls, and polls created for public consumption have incentives to ensure that their results are roughly consistent with the narrative being told in the press if they want to garner public attention. Pollsters also have further financial incentives to get it right which may make them leery of ignoring the information contained in other polls…”
“Beyond the implications for interpreting IVR polls, the larger point here is that if polls take cues from one another, then the hundreds of polls being reported are not really as informative as the number of polls would imply.”
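The paper’s core comparison is a split of IVR poll errors by whether a live-interviewer poll was already in the field. A small sketch of that comparison, using invented error values rather than the study’s actual data:

```python
# Hypothetical illustration of the robo-poll benchmarking finding:
# split IVR poll errors by whether a live-interviewer poll preceded
# them, then compare mean absolute error per group. Data are invented.

ivr_polls = [
    # (absolute_error_in_points, live_poll_came_first)
    (2.0, True), (1.5, True), (2.5, True),
    (5.0, False), (6.5, False), (4.5, False),
]

def mean_abs_error(polls, preceded_by_live_poll):
    """Mean absolute error for IVR polls in one group."""
    errs = [err for err, first in polls if first == preceded_by_live_poll]
    return sum(errs) / len(errs)

with_benchmark = mean_abs_error(ivr_polls, True)
without_benchmark = mean_abs_error(ivr_polls, False)
print(f"with prior live poll: {with_benchmark:.1f} pts")
print(f"without prior live poll: {without_benchmark:.1f} pts")
```

In the made-up data, IVR polls with a live-poll benchmark show markedly smaller errors, mirroring the pattern the paper reports; the study’s actual magnitudes differ.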