Saturday, Sept. 20, 2008 | Like the 100-meter butterfly and beach volleyball, the arcane science of polling is something most Americans couldn't care less about almost all of the time, except for a few weeks every four years. We are in one of those periods now.

Until the first Tuesday in November, the results of the latest polls tracking the contest between John McCain and Barack Obama will be a national obsession, and local races, including the hotly contested city attorney's race, will be the talk of the town. Front pages and newscasts lead with numbers from funny-named organizations like Gallup and Rasmussen, and terms like "margin of error" and "likely voters" are being uttered by people without pocket protectors.

John Nienstedt, president of San Diego-based Competitive Edge Research & Communications, obsesses over such things all the time. He recently took time to give us a primer of sorts on the mechanics of polling.

What would you say is the most misunderstood aspect of public opinion polling?

The main thing that is misunderstood is the idea that all polls are the same. Everybody knows that a 300-person sample will be less accurate than a 1,000-person sample, but that is about the extent of their knowledge. For instance, a live survey (using professional interviewers) will be more accurate than something done by an automated company like Survey USA.

And there is no substitute for good questions. You have to ask the right questions, and you have to ask the right people. There is a whole body of experience that good pollsters have built up over the years — sampling, questionnaire design, methodology — that separate the good polls from the not so good polls.
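
To put rough numbers on the sample-size point, here is a minimal sketch of the standard margin-of-error calculation (the 95 percent confidence level and the 50/50 split are textbook defaults for the worst case, not figures from Nienstedt):

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # 95 percent margin of sampling error for a proportion p
        # estimated from a simple random sample of size n.
        return z * math.sqrt(p * (1 - p) / n)

    for n in (300, 1000):
        print(f"n={n}: +/- {margin_of_error(n):.1%}")
    # n=300: +/- 5.7%
    # n=1000: +/- 3.1%

Note that tripling the sample size only shrinks the error by roughly the square root of three, which is one reason pollsters rarely go far past 1,000 respondents.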

Can you give me an example of a good question versus a bad question that you see in polls?

I will give an example from the recent mayoral and city attorney races. Survey USA at times showed the mayor's race as a dead heat, and they also showed [Mike Aguirre] and Scott Peters beating Jan Goldsmith in the city attorney's race. (Goldsmith ended up beating Aguirre in the primary by three points and Peters by 12 points.)

One of the reasons for that was they did not use the employment designations that appear on the ballot. In some cases that doesn't matter. But in the city attorney's race, everybody knows who the city attorney is: it's Mike Aguirre … he is one of the most high-profile city attorneys we've had in the history of San Diego. Jan Goldsmith, though, is not as well known, so his ballot label, which was San Diego Superior Court judge, is much more important. If you don't have that in there, he is going to underperform relative to his true share of the vote. That was a crystal-clear example of a question that was poorly designed and poorly worded. It did not replicate what was on the ballot.

But these robo-polls seem to be getting more prevalent. What do you think of them? Is live polling becoming a thing of the past?

Polling using the machines has got to be cheaper than polling with live people. But it is less accurate most of the time. And yes, there are situations where Survey USA gets it right. But I have noticed misses over the years, like in 2000, when [Survey USA] called Ron Roberts as the next mayor, and it wasn't close.

In 2004, there was a similar situation in the primary for mayor: they had Steve Francis up by eight points against Jerry Sanders a week before the election. Here is the proof in the pudding: go back to any robo-call survey and you will find far fewer undecideds, 2 or 3 percent, so few that it is implausible. The reason is that the only people who participate in those surveys are the people who are really charged up. Who else is going to want to give their opinion to an automated surveyor?

One would think that a poll of likely voters is more accurate than a poll of registered voters. Do you think that will be the case this year given the predictions of a record voter turnout?

A good way to look at this: if there is high turnout, then the base of registered voters is more appropriate. But if turnout is substantially smaller, then likely voters become more appropriate.

There are situations where there is a big difference (we call it "turnout differential") between a high-turnout election and, let's say, a moderate-turnout election. That is what we are seeing in the (presidential) race, which shows McCain doing much better among likely voters and Obama doing much better among pure registrants. You have a dynamic that clearly suggests a higher turnout will help Obama. But something your readers should know is that every national survey is conducted using the random digit dial sampling method. Random digit dial means they buy a professionally designed list, but it is just randomly selected phone numbers generated by a whole set of algorithms.

They are fine samples as far as randomness goes. But when you call, the first question is: are you registered to vote? At that point, the pollster is relying on the accuracy and honesty of that individual. The second question is: how often have you voted in the past? Again, the pollster is relying on the honesty and recall of the respondent. What does this mean in practical terms?
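
For the curious, here is a minimal toy sketch of what a random digit dial generator might look like (the San Diego area codes are just for flavor; real sampling vendors screen out unassigned and business exchanges and do far more):

    import random

    def rdd_sample(area_codes, n, seed=None):
        # Toy random digit dialing: pick an area code, then randomize
        # the remaining seven digits. Real frames exclude unassigned
        # exchanges; this sketch does not.
        rng = random.Random(seed)
        numbers = []
        for _ in range(n):
            area = rng.choice(area_codes)
            line = rng.randrange(10**7)  # the seven local digits
            numbers.append(f"({area}) {line // 10000:03d}-{line % 10000:04d}")
        return numbers

    print(rdd_sample(["619", "858"], 3, seed=1))

The randomness is genuine, which is Nienstedt's point: the weakness is not the sample but the self-reported screening questions that follow.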

It means that those national polls tend to be fraught with what we call "social desirability bias," which is: "I want to give the socially desirable answer, so I'm going to tell someone I'm registered to vote even though I might not be." So this can result in error. And is this reflected in the margin of sampling error? Absolutely not. The margin of error is purely related to the sample size.

How much does this social desirability bias add to the overall error in a poll?

Unknowable. And it's not just social desirability. There are question-order issues. There is respondent availability. There are all sorts of things. We do our best to combat these things, so that the only thing left is that margin of sampling error, right? But it is unknowable. So you have to be aware of all these pitfalls.

This brings me to my next question. There always seems to be confusion over what the margin of sampling error means in a poll. What does it mean if a candidate’s lead is within the margin of error in a particular poll? Often when there is a narrow spread between candidates they are reported as being in a statistical dead heat. Are they in fact in a statistical dead heat?

If you ask 20 different pollsters you will get 20 different answers on this. My response to the reading public is, the number [the pollster] gives you is the best estimate they can make for that candidate. The other number is the best estimate they can make for the other candidate. I think the term statistical dead heat is bogus. A statistical dead heat is when the estimates are exactly the same. That to my mind would be a dead heat.
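
For readers who want the arithmetic behind that dispute, here is a minimal sketch (textbook formulas, not Nienstedt's method) of the margin of error on the lead itself. When both shares come from the same poll they are negatively correlated, so the margin on the gap is wider than the headline figure:

    import math

    def lead_moe(p1, p2, n, z=1.96):
        # 95 percent margin of error on the lead (p1 - p2) when both
        # shares come from the same sample. The lead's variance exceeds
        # either share's own variance.
        var = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
        return z * math.sqrt(var)

    # A 48-45 race in a 1,000-person poll:
    print(f"lead = 3.0 points, +/- {lead_moe(0.48, 0.45, 1000):.1%}")
    # lead = 3.0 points, +/- 6.0%

On this arithmetic the lead is indeed "within the margin of error," yet the 48 percent candidate remains the more likely leader, which is why Nienstedt calls the dead-heat label bogus.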

There has been a lot of talk, especially as it pertains to the presidential race, about pollsters missing a group of voters because they are exclusively cell phone users, and their phone numbers are not available. What is the significance of this?

Generally a poppycock argument made by people who either don't know what they are talking about or don't want to believe the numbers. The issue certainly exists, and you have to be aware of it. Cell-phone-only users tend to be less likely to vote than people who have a land line. That is a dyed-in-the-wool fact that we see time and time again. First off, you are only missing 5, maybe 10 percent of the electorate. Heck, you are going to get refused by 30 percent of the electorate. The size of the problem there is pretty small.

Secondly, I have seen no evidence that cell-phone-only people are politically different from people with land lines, except that they are younger. Now, younger people may tend to vote a certain way. The cure for that is to weight those younger people up. And as long as cell-phone-only people don't vote dramatically differently from people with land lines, the weighting should handle any problems.
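
A minimal sketch of what "weighting those younger people up" means in practice (the two age brackets and the 30/70 electorate split are invented for illustration):

    from collections import Counter

    def age_weights(respondent_ages, population_shares):
        # Post-stratification: weight each respondent so the sample's
        # age mix matches the electorate's.
        # weight = population share / sample share
        n = len(respondent_ages)
        sample_shares = {age: count / n
                         for age, count in Counter(respondent_ages).items()}
        return {age: population_shares[age] / share
                for age, share in sample_shares.items()}

    # Young voters are 30 percent of the electorate but only 15 percent
    # of this sample, so each one counts roughly double.
    sample = ["18-34"] * 15 + ["35+"] * 85
    print(age_weights(sample, {"18-34": 0.30, "35+": 0.70}))
    # {'18-34': 2.0, '35+': 0.8235294117647058}

The weighting only fixes the age imbalance; as Nienstedt notes, it cannot correct for cell-phone-only voters behaving differently from land-line voters of the same age.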

— Interview by DAVID WASHBURN
