
About Those Pollster Report Cards, Part III (Response to Blumenthal ‘National Journal’ Column)

02/28/08 04:19 PM

Charles Franklin (left) and Mark Blumenthal receive the Warren Mitofsky AAPOR Innovator’s Award from Mitofsky’s widow, Mia Mathar, on 05/19/07. Photo by Steve Everett.

Mark Blumenthal, writing in today's (02/28/08) National Journal column, "Telling Good Polls From Bad," wonders for the third time which polls and which pollsters ought to be included in any "Pollster Report Card." I have for two weeks owed Blumenthal a response to his initial question about whether pollsters who release polls closer to Election Day have an inherent advantage over pollsters who are unable to do so.

This post will just begin to answer that question. More thoughts another day.

To start: This is a fascinating question.

As part of a much larger effort by SurveyUSA to establish a relationship between Election Poll methodology and Election Poll accuracy, SurveyUSA has begun in 2008 preparing what we refer to in-house as an "All by All by All" reckoning. Meaning: "All Pollsters x All Error Measures x All Methodologies." This is reasonably simple to conceptualize, murder to implement. And that's why, to my knowledge, it has never been done before, not even by Charles Franklin, who is the Baryshnikov of pollster statistics and, with Blumenthal, winner of the 2007 Warren Mitofsky AAPOR Innovator Award.
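To make "All x All x All" concrete, here is a minimal sketch, in Python, of how one slice of such a grid might be assembled: every pollster crossed with every Error Measure. The pollster names, the poll numbers, and the two Error Measures shown (a mean absolute error across candidates, and a Mosteller-style error on the top-two margin) are illustrative stand-ins, not SurveyUSA's actual grid.

```python
# Sketch only: an "All Pollsters x All Error Measures" grid.
# Pollster names and numbers are invented for illustration.

# Final result and each poll: candidate -> percent (top two shown).
final = {"Candidate 1": 34, "Candidate 2": 25}

polls = {
    "Pollster A": {"Candidate 1": 31, "Candidate 2": 26},
    "Pollster B": {"Candidate 1": 28, "Candidate 2": 27},
    "Pollster C": {"Candidate 1": 35, "Candidate 2": 24},
}

def mean_abs_error(poll, result):
    """Average absolute difference, candidate by candidate."""
    return sum(abs(poll[c] - result[c]) for c in result) / len(result)

def margin_error(poll, result):
    """Absolute error on the top-two margin (Mosteller-style)."""
    c1, c2 = sorted(result, key=result.get, reverse=True)[:2]
    return abs((poll[c1] - poll[c2]) - (result[c1] - result[c2]))

measures = {"Mean Abs Error": mean_abs_error, "Margin Error": margin_error}

# The grid: every pollster crossed with every Error Measure.
grid = {p: {name: round(fn(poll, final), 1) for name, fn in measures.items()}
        for p, poll in polls.items()}

for pollster, scores in grid.items():
    print(pollster, scores)
```

The real reckoning adds the third axis, Methodology, by attaching to each pollster the methodological facts the pollster has disclosed.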

Two charts follow. The charts do not prove or disprove anything. Rather, the charts are a first glimpse at what happens when you tighten and relax the criteria for which polls to include in a Pollster Report Card.

Both charts are from the Iowa Republican Caucus on 01/03/08. The first chart includes 11 pollsters who released data in the 17 days prior to the Iowa Caucus. The second chart includes a subset, just the 6 pollsters who released data during the 4 days prior to the Iowa Caucus. (Christmas and New Year’s complicated the scheduling of IA caucus polls for all research firms this election cycle.)

The orange boxes highlight which pollster was the #1 rated, or most accurate, for a given Error Measure. The chart is sorted with the polls released closest to the election on the left and the polls released furthest from the election (the greatest number of days before the election) on the right. The word "SORT," in red type with a red arrow, highlights the sort row.
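For readers who want to reproduce that sort-and-flag logic, a short sketch follows, reusing the kind of grid built above, with invented days-before-election values; the "orange box" is simply the lowest error, ties included, on each Error Measure.

```python
# Sketch only: sort pollsters freshest-first and flag the "orange box"
# (lowest error, ties included) for each Error Measure. Data are invented.

days_out = {"Pollster A": 1, "Pollster B": 3, "Pollster C": 9}
grid = {
    "Pollster A": {"Mean Abs Error": 2.5, "Margin Error": 4.0},
    "Pollster B": {"Mean Abs Error": 1.5, "Margin Error": 1.0},
    "Pollster C": {"Mean Abs Error": 1.5, "Margin Error": 6.0},
}

# The SORT row: polls released closest to the election come first (leftmost).
order = sorted(grid, key=days_out.get)

for measure in ["Mean Abs Error", "Margin Error"]:
    best = min(scores[measure] for scores in grid.values())
    flagged = ["*" + p + "*" if grid[p][measure] == best else p for p in order]
    print(measure, flagged)
```

Note that two pollsters can tie at the lowest error on a measure, which is why the charts say "most accurate, or tied for most accurate."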

Scholar Jon Krosnick, quoted in Blumenthal's National Journal column today, says this:

Most election pollsters believe that surveys done long before Election Day are not intended to predict election outcomes, but they would agree that final pre-election polls should predict election outcomes well.

If Krosnick and conventional wisdom are correct, the polls on the left side of the chart, in general and on average, should be lit up in orange, and the polls on the right side of the chart should not.

Instead, in Iowa, ABC News, which had the oldest poll included in the Iowa comparison, was most accurate, or tied for most accurate, by 4 Error Measures. Research 2000, polling for KCCI-TV in Des Moines, which had the 8th freshest (or: 4th most stale) set of data, was most accurate, or tied for most accurate, by 4 Error Measures.

(keep clicking on the image until it enlarges and is legible; the chart has footnotes)

All x All x All IA GOP Including 11 Pollsters 17 Days from Caucus

Restating: this proves nothing, disproves nothing. It’s just a datapoint – one of many that SurveyUSA will post as quickly as we are able to finish proofing the data in the grids.

When you look at the subset of 6 pollsters in Iowa who released data 4 or fewer days before the GOP caucus, and exclude the 5 pollsters with older data, the orange boxes light up in an entirely different way.

(keep clicking on the image until it enlarges and is legible)

All x All x All IA GOP Newest Data 6 Pollsters 4 Days Before Caucus

In this view, Selzer & Co., polling for the Des Moines Register, is most accurate, or tied for most accurate, by 5 Error Measures. But Selzer had the stalest, least fresh data of any pollster in this sub-collection.

The real goal of this analysis is not to embarrass any one particular pollster in any one particular election. In any one particular contest, it is possible for one pollster to be wrong, and/or for all pollsters to be wrong. Many of the nation's top pollsters will gather in New York City tonight for a symposium titled "What Happened in New Hampshire?"

Instead: The real goal of this analysis is to examine whether polls with traditional methodology are consistently more accurate than polls with non-traditional methodology.

Much about polling is written every day to perpetuate the belief that only certain public opinion polls are worthy of examination. And the truth is: only certain polls are. But the criteria used today to separate the worthy polls from the unworthy polls are pitiful.

Pollster.com's "Disclosure Project," which is a heroic and noble undertaking, gets us halfway to some real learning. The Disclosure Project begins to look at, precisely, which pollsters are doing what and how.

But Blumenthal, who is a gentleman and a diplomat, and who maintains cordial relations with just about everyone, does not yet take it the rest of the way, which is to explore:

“In the end, does methodological orthodoxy make … all the difference? Some difference? Or no difference? … when conducting a pre-election poll?”

SurveyUSA’s analysis, begun here, builds upon and incorporates the learning from Pollster.com’s Disclosure Project, to search for the correlation, or lack thereof, between “best practice” and “best outcome.”

In theory, “best practice” should consistently lead to “best outcome.”

In practice, the world is not so neat.
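One way to make that test concrete, sketched below with invented numbers: give each poll a "best practice" score (for example, a count of Disclosure Project items it satisfies), then check whether that score correlates with accuracy on a chosen Error Measure.

```python
# Sketch only: does "best practice" track "best outcome"?
# The (methodology score, error) pairs are invented for illustration;
# a real test would use Disclosure Project items and the grids above.

pairs = [(9, 2.1), (8, 4.0), (7, 1.8), (5, 3.5), (3, 2.4), (2, 5.0)]

def pearson_r(pairs):
    """Pearson correlation between methodology score and error."""
    n = len(pairs)
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# If methodological orthodoxy made "all the difference," r would be strongly
# negative: higher methodology score, lower error.
print(round(pearson_r(pairs), 2))
```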

Let’s see what the data actually teach us.

Jay H Leve
Editor
SurveyUSA
editor@surveyusa.com



Postscript: If you identify a mistake in this or any SurveyUSA table, please bring it to my attention. Separately: some cells in the tables above contain a "?". That means SurveyUSA has not yet found where the pollster has disclosed this information. As soon as we are able to replace each "?" with the correct value, we will do so, proactively and retroactively.
