October 8, 2020
Nearly half of your panel data is trash. Here is how to fix it.
Grey Matter Research and Harmon Research teamed up for the new study Still More Dirty Little Secrets of Online Panels. We fielded a pretty typical online questionnaire with five of the ten largest panel providers in the industry. But we set up a variety of traps, tests, and quality control measures, and the results were pretty disturbing.
First, the proportion of respondents we tossed out of the study for having serious problems was 46%. Just to be clear, these weren’t respondents who had one mistake or a verbatim that was too short. Nearly half of our respondents either had a problem that was so egregious they obviously needed to go, or they had multiple problems – as in four or more in a ten-minute questionnaire.
Second, Harmon Research fields surveys with tens of thousands of online panel respondents each month. In their experience, at least 90% of researchers are not taking sufficient steps to ensure online panel quality. Most throw out respondents with gibberish open-ends (e.g. “kukn;lkjij;lk”) and eliminate people who complete 15-minute questionnaires in three minutes, but in Harmon’s observation, that is usually where the quality control stops.
Researchers have been aware for a long time that disengaged and fraudulent respondents (including bots and click farms) are a problem in online panel surveys. But we wanted to quantify just how much of a problem this is, and exactly what impact it may have on your data. Still More Dirty Little Secrets of Online Panels answers both questions.
We’ve already stated that nearly half the panel respondents couldn’t pass our quality check. But just what type of impact are these bogus respondents having on your survey data?
We took the 880 respondents we eliminated from the study and compared them with the respondents we kept. The differences between the two groups were dramatic; the full report walks through specific examples.
First, are you comfortable with nearly half your data being utter trash? If you’ve read this far, I’ll assume the answer is no.
Second, and most importantly, what are you going to do about it?
The responsibility does not rest only with panel companies. They are under intense pressure from clients to provide data faster and cheaper every day, and that does nothing to foster quality. Tens of thousands of researchers and marketers rely on online panel research, and that’s not likely to change any time soon. The problem is that many clients are either 1) in charge of the data collection themselves and not taking the necessary steps to ensure respondent quality, or 2) handing the data collection off to a vendor and just assuming that the vendor is responsibly taking those critical steps. Probably more than nine out of ten are not.
The key is that this is not something that can be solved after the data is collected. It has to be addressed before the study reaches the field. Quality assurance must be baked into the study design. You need programming instructions that terminate people from the questionnaire when there are egregious problems (and ways to identify those egregious problems). You need questionnaire design that includes traps and measures to determine whether a respondent is valid or not. Then during and after the field, you need an intensive data review. If you’re not comprehensively handling it before, during, and after the field, you’re not really handling it at all.
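To make the “terminate in the field” idea concrete, here is a minimal sketch of what such programming logic might look like. The question names and trap designs (an age consistency check, a crude open-end screen, a fake-brand trap) are hypothetical, assumed for illustration; they are not the specific measures used in the study.

```python
import re

# Hypothetical mid-survey checks; question names and thresholds are
# illustrative, not the actual traps used in the study.

REPEAT_RUN = re.compile(r"(.)\1{3,}")                    # same character 4+ times
NO_VOWELS = re.compile(r"^[^aeiou\s]{6,}$", re.IGNORECASE)  # long vowel-free string

def is_gibberish(text: str) -> bool:
    """Crude open-end screen: very short, repeated-run, or vowel-free answers."""
    cleaned = text.strip()
    return (len(cleaned) < 3
            or bool(REPEAT_RUN.search(cleaned))
            or bool(NO_VOWELS.match(cleaned)))

def should_terminate(answers: dict) -> bool:
    """Terminate the interview as soon as a single egregious problem appears."""
    # Contradictory screener vs. demographic answers (e.g., age asked twice)
    if abs(answers["age_screener"] - answers["age_demographics"]) > 2:
        return True
    # Gibberish in a required open-end
    if is_gibberish(answers["open_end_1"]):
        return True
    # Claimed familiarity with a brand that doesn't exist (a trap option)
    if answers["aware_of_fake_brand"]:
        return True
    return False
```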
Different types of studies may require different types of quality measures. Speeding problems are more obvious on a 15-minute questionnaire than a 5-minute questionnaire; screen timers don’t work well on short questions; straightlining isn’t an issue on questionnaires with no grids.
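As one illustration of tailoring a measure to the instrument, a speeding check might be defined relative to each questionnaire’s own median completion time rather than as a fixed number of minutes. The cutoff fraction below is an assumed starting point, not a threshold from the study:

```python
from statistics import median

def flag_speeders(durations_sec: list[float], fraction: float = 0.4) -> list[bool]:
    """Flag completes faster than a fraction of the median duration.

    The 40% figure is illustrative only; it needs tuning per questionnaire,
    since speeding is much harder to detect reliably on a 5-minute survey
    than on a 15-minute one.
    """
    cutoff = fraction * median(durations_sec)
    return [d < cutoff for d in durations_sec]
```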
We have also found some quality measures to be consistently much better than others. For instance, we rarely use red herring questions because they’re too easily identified by bogus respondents and even bots. Our latest study showed that many of the people who correctly answered the red herring questions were tossed out for other major problems.
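If you want to test this in your own data, one simple cross-check is to count how many respondents who passed a red herring still tripped other quality flags. The records below are invented purely to show the shape of the check:

```python
# Hypothetical records from a post-field data review; the flag names and
# values are illustrative, not the study's actual data.
respondents = [
    {"id": 101, "passed_red_herring": True,  "other_flags": {"speeding", "straightlining"}},
    {"id": 102, "passed_red_herring": True,  "other_flags": set()},
    {"id": 103, "passed_red_herring": False, "other_flags": {"gibberish_open_end"}},
]

passers = [r for r in respondents if r["passed_red_herring"]]
compromised = sum(1 for r in passers if r["other_flags"])
print(f"{compromised} of {len(passers)} red-herring passers failed other checks")
```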
There are various ways to combat the problem of bogus respondents, and the best defense is multiple methods working in concert. This is covered in more depth in the full report (available without cost – contact [email protected]). No combination of measures is perfect, but it’s far better than getting brand awareness figures inflated by 400% or getting critical feedback on a concept respondents didn’t even read. Unfortunately, there’s a really good chance you’re getting a lot of that in your panel studies.
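As a final sketch, here is one way that combined decision rule might be expressed in a data review, using the removal logic described earlier in this article (one egregious problem, or four or more lesser ones). The flag names are invented for illustration:

```python
# Flags treated as automatic removals; names are hypothetical.
EGREGIOUS = {"bot_signature", "duplicate_respondent", "fake_brand_claimed"}

def keep_respondent(flags: set[str], minor_limit: int = 4) -> bool:
    """Drop anyone with one egregious problem, or `minor_limit`+ lesser flags."""
    if flags & EGREGIOUS:
        return False
    return len(flags - EGREGIOUS) < minor_limit
```

However the rule is expressed, the point stands: no single check is trusted on its own.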