I have a few San Diego friends on Facebook. All of them surf. This means everyone in San Diego surfs, right? Well, no, of course not.

Here’s what I did wrong. I checked with a few guys ranging in age from 30 to 45 who use Facebook, and committed what pollsters call a “sampling error.” I got a result that was true for my subgroup, but doesn’t reflect the population as a whole.

In San Diego’s special mayoral election, the most heavily reported political poll was conducted by SurveyUSA, a national firm with a strong reputation. Their polls on the eve of the election showed a virtual tie, but come Election Day, the clear double-digit winner was Kevin Faulconer.

The result was so out of line that the pollster himself said it was “sobering,” but then blamed the voters for not turning out in the numbers they said they would.

To find out how this happened, we can look at how they contacted people and who completed the poll.

The SurveyUSA poll was based on random phone calls to people who live in San Diego. They asked whether the respondent was a registered voter, whether they were likely to vote and who they were supporting. Out of the gate, this could pose some problems. Who among us wants to admit we’re not registered or not going to vote? Poll respondents often won’t give answers they believe are less socially acceptable.

This method cast too wide a net. They had way too many young people and not enough senior citizens — a population that ended up comprising more than a third of the voters. And, most damaging, in a race where Latinos were strongly favoring one candidate, SurveyUSA’s sample projected there would be record high Latino turnout, yet actual Latino turnout was near record lows.

The result? Their poll reported what the vote would have looked like if the electorate had been a different blend of young, old, white and Latino voters than the one that actually showed up on Election Day. They got it wrong, all because of a bad sample.
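To see concretely why the blend matters, here’s a minimal sketch in Python. The subgroup preferences and turnout shares below are entirely hypothetical, invented for illustration; they are not SurveyUSA’s data or the actual San Diego results. The point is only that the same voters’ preferences produce different toplines depending on whose turnout you assume.

```python
# A minimal sketch of how a poll's topline depends on its turnout assumptions.
# All numbers below are hypothetical, for illustration only; they are not
# SurveyUSA's data or the actual San Diego results.

def topline(support_by_group, turnout_share):
    """Weighted average of each group's candidate support, weighted by
    that group's assumed share of the electorate."""
    return sum(support_by_group[g] * turnout_share[g] for g in support_by_group)

# Hypothetical support for Candidate A within each subgroup (as fractions).
support = {"young": 0.60, "senior": 0.40, "latino": 0.65, "other": 0.45}

# A sample that over-represents young and Latino voters...
projected_mix = {"young": 0.30, "senior": 0.20, "latino": 0.25, "other": 0.25}
# ...versus an electorate with heavy senior turnout and low Latino turnout.
actual_mix    = {"young": 0.15, "senior": 0.35, "latino": 0.10, "other": 0.40}

print(f"Poll's implied result: {topline(support, projected_mix):.1%}")  # 53.5%
print(f"Election-day blend:    {topline(support, actual_mix):.1%}")     # 47.5%
```

With these made-up numbers, nobody changes their mind, yet the topline swings by six points simply because the assumed mix of young, senior and Latino voters is different from the mix that actually votes. That’s the kind of gap a bad sample can open up.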

My company provides data for pollsters, but instead of random phone numbers, we give them lists of registered voters we’ve determined have a strong likelihood of voting. That’s important because likely voters differ greatly from whoever you happen to reach by randomly calling people.
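As a rough illustration of the idea, here’s a minimal Python sketch of drawing a call list from a voter file based on past turnout. The file layout, column names and the “voted in at least two of the last three elections” rule are my own illustrative assumptions, not Political Data Inc.’s actual methodology.

```python
# A minimal sketch of building a likely-voter call list from a voter file,
# rather than dialing random numbers. Column names and the turnout rule
# are hypothetical, not Political Data Inc.'s actual method.
import csv
import random

def likely_voters(voter_file_path, min_past_votes=2, sample_size=500):
    with open(voter_file_path, newline="") as f:
        voters = [
            row for row in csv.DictReader(f)
            # hypothetical 0/1 columns recording past participation
            if sum(int(row[col]) for col in
                   ("voted_2010", "voted_2012", "voted_2013_primary")) >= min_past_votes
        ]
    # Draw the call list only from voters with a demonstrated history of turning out.
    return random.sample(voters, min(sample_size, len(voters)))
```

The design choice is the same one described above: start from people with a record of voting, so the sample looks like the likely electorate rather than like the general population.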

In this mayoral runoff, as in the primary, our samples matched the actual partisan breakdown dead-on, along with the percentages of young, older and Latino voters.

For the most part, polls using our data are kept private to the campaigns. So, while SurveyUSA’s data was telling the public one story, the campaigns likely knew quite another.

Keep this in mind the next time you see a polling result. It’s always worth checking to see how the poll was done before jumping to conclusions.

Paul Mitchell is vice president of Political Data Inc. Mitchell’s commentary has been edited for style and clarity.
