Ahead of the 2008 U.S. presidential election, Copernicus Marketing Consulting and Research looked at the phenomenon of "polling failure" and response bias based on social desirability.
We’re getting down to the wire in the U.S. presidential election and all eyes are on the polls. Yet for all the talk about who's ahead and by how much, there’s even more talk about why the poll results may not be all that valid. After so many embarrassing polling misses, everyone—from the candidates, to the media, to the pollsters themselves—routinely hedges the numbers with one explanation or another about why we shouldn’t be too surprised if the outcome on election day looks nothing like what the polls prognosticated.
Throughout history, there have been many explanations for “polling failure.” In 1936, for instance, the Literary Digest poll—at the time the most prestigious in the nation—predicted that Alf Landon, the Republican, would win big in the presidential race against Franklin Roosevelt, the Democrat. Roosevelt, of course, ended up winning in a landslide, and the perception of polling as a reliable resource hasn’t been the same since. In this case, the reason the poll was so far off was the unrepresentative nature of the Literary Digest sample. As it turned out, the magazine’s readers and poll sample were considerably more Republican than the rest of the country.
These days most polling firms—including Gallup, Pew, USA Today, Zogby, and Rasmussen—use a relatively straightforward polling methodology: they ask people a single question about who they plan to vote for in the next presidential election. They present the options and let respondents decide. They do this on the phone, sometimes in person, and increasingly over the internet. Each respondent is then weighted by his or her self-reported probability of voting, or by a probability of voting estimated from past returns for the respondent’s region, state, city, or zip code and/or demographic profile. Generally speaking, the more sophisticated the weighting, the more accurate the poll.
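To make the weighting idea concrete, here is a minimal sketch in Python with made-up respondents and turnout probabilities. It illustrates the principle only; it is not any pollster’s actual model.

```python
# Illustrative only: weight each respondent's stated choice by an assumed
# probability that he or she will actually turn out to vote.

# Hypothetical respondents: (stated choice, estimated probability of voting)
respondents = [
    ("Candidate A", 0.90),
    ("Candidate B", 0.40),
    ("Candidate A", 0.70),
    ("Candidate B", 0.95),
    ("Undecided",   0.50),
]

def weighted_shares(sample):
    """Return each option's share of the turnout-weighted sample."""
    totals = {}
    for choice, turnout_prob in sample:
        totals[choice] = totals.get(choice, 0.0) + turnout_prob
    total_weight = sum(totals.values())
    return {choice: weight / total_weight for choice, weight in totals.items()}

for choice, share in weighted_shares(respondents).items():
    print(f"{choice}: {share:.1%}")
```

A raw head-to-head count of the same five hypothetical respondents would split the race evenly; the turnout weighting is what tilts the estimate toward the candidate whose supporters are more likely to show up.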
While we haven’t come across too many people taking issue with the sampling and weighting of 2008 presidential polls per se, many have wondered aloud about how honest potential voters are with pollsters when it comes to their intention to vote for Barack Obama. As Fox News put it, “white voters are hesitant to tell pollsters they won’t vote for a black candidate.” To put it in research speak, respondents are “overstating” [sounds a lot nicer than bald-faced lying, no?] their support. The reason for the overstatement is hardly a mystery—it’s often called the social desirability response set, which might be described as the generalized tendency for respondents in an interview to say nice (aka desirable or politically correct) things about themselves in order to look good, and to avoid saying negative, politically incorrect things.
Anytime we have an opportunity to say something nice about ourselves—we watch PBS, brush and floss our teeth three times a day, and have sex 10 times a week—we tend to take it. Conversely, if we have the opportunity to under-report negative or socially undesirable attitudes and behaviors—we hardly drink at all and certainly never watch the Home Shopping Network—we’ll take that too.
Pollsters know about this bias but have largely ignored such effects because they’re not that important in a typical poll or election. In most political races, there isn’t a socially desirable or politically correct response—there’s just candidate A, candidate B, or none of the above. They might as well be asking whether you prefer Coke or Pepsi, or white, wheat, or rye bread. But in some races—Obama v. McCain being one example—there just might be a “right” answer respondents feel they need to give.
The real mystery here is why pollsters aren’t checking for the existence of response bias this year. By asking respondents who their best friend is going to vote for, for instance, they could gauge the level of bias present; a sketch of that comparison appears below. Pollsters could also conduct their interviews in person at locations around the country where people can go into a simulated voting booth. In these cases, prospective voters are convinced of their anonymity and are much more likely to tell the truth than they are when responding to a stranger on the telephone. The latter method is more expensive to execute than conventional polling, but it is much more likely to yield valid answers, particularly where social desirability response bias is likely to occur. Given the drubbing polls have taken in recent years for inaccuracies and lack of predictive power, it may not be a bad idea for pollsters to consider these approaches.
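For the curious, here is a rough Python sketch, again with made-up numbers, of how the “best friend” check could flag overstatement: compare a candidate’s support when respondents answer for themselves with the support they report for their best friend.

```python
# Illustrative only: a large gap between self-reported and friend-reported
# support is one possible sign of social desirability bias.

# Hypothetical tallies from the same sample of respondents
self_report   = {"Candidate A": 540, "Candidate B": 460}
friend_report = {"Candidate A": 500, "Candidate B": 500}

def support_share(tally, candidate):
    """Candidate's share of all responses in a tally."""
    return tally[candidate] / sum(tally.values())

candidate = "Candidate A"
gap = support_share(self_report, candidate) - support_share(friend_report, candidate)
print(f"Self-reported support for {candidate}:   {support_share(self_report, candidate):.1%}")
print(f"Friend-reported support for {candidate}: {support_share(friend_report, candidate):.1%}")
print(f"Possible overstatement: {gap:+.1%}")
```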
-October 2008
This content was provided by Copernicus Marketing Consulting and Research. Visit their website at www.copernicusmarketing.com.