Research Methodologies

August 21, 2020

Panel Quality Stinks and Clients Are To Blame

Why should panel companies improve their results when clients accept the status quo and won’t pay for better?


There are two truths in the consumer insights industry today:

  1. Online panel respondent quality stinks.
  2. It’s not the fault of the panel companies – it’s the fault of their clients.

Now that I’ve written an introduction that manages to anger both vendors and clients, let’s look at both sides of this equation and see if I can’t really enrage everyone.


Panel Respondent Quality Stinks

The quality is, to put it mildly, just atrocious.

You know the problems:  duplicates, speeders, cheaters, bots, straightliners, and more.  I’ll give you just a few quick examples (we detail these in more depth in our report 6 Ways Your Survey Research May Be Misleading You).


The Fake Charity

In a recent panel study, we asked people about the charitable organizations they support.  We included a ringer:  a non-existent organization with such an absurd fictitious name that it would be impossible to confuse it with a real one.  Yet hundreds of people – over 13% of the respondents to this question – claimed they financially supported that organization.  In other words, they lied – either intentionally, hoping to qualify, or because they were just clicking random answers.


Guessing

In the same questionnaire, we asked qualified respondents to read a paragraph about a charitable cause and answer a couple of questions.  The paragraph was 143 words.  (Barely skimming it took me about 10 seconds; reading it fully would take the typical adult about 29 seconds).  We prefaced the paragraph by asking respondents to read it carefully.

Unfortunately, 29% of respondents clicked through that screen in under ten seconds, with most of them doing so in five seconds or less.  Then they answered questions about a stimulus they hadn’t read.


Speed “Readers”

On a different online study, we tested three different messaging statements, each averaging 29 words.  We calculated that a minimal read-through of all three statements would take no less than 15 seconds, so we set the threshold 33% lower than that, at ten seconds.  Fully 47% of all respondents skimmed through the messages in ten seconds or less (many in considerably less), clearly demonstrating that they weren’t even reading the messages they were being asked to rate.
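The threshold arithmetic above is easy to automate.  Here’s a minimal sketch in Python, assuming a careful-read speed of roughly 300 words per minute (which lines up with the 29-second figure for a 143-word paragraph); the function names and the exact margin are my own illustration, not any platform’s feature:

```python
CAREFUL_WPM = 300  # assumed typical adult reading speed, words per minute


def min_read_seconds(word_count: int, wpm: int = CAREFUL_WPM) -> float:
    """Seconds a minimal careful read of `word_count` words should take."""
    return word_count / wpm * 60


def speeder_threshold(word_count: int, margin: float = 0.33) -> float:
    """Cutoff time: the minimal read time, reduced by a safety margin."""
    return min_read_seconds(word_count) * (1 - margin)


def flag_speeders(page_times: list[float], word_count: int) -> list[bool]:
    """True for each respondent whose time on the page fell below the cutoff."""
    cutoff = speeder_threshold(word_count)
    return [t < cutoff for t in page_times]
```

Under these assumptions, three statements totaling 87 words yield a cutoff of around 11–12 seconds – in the same ballpark as the ten-second threshold used in the study described above.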


Random Answers

And this is to say nothing of people who answer “good” or “I like it” when asked to describe how they chose which realtor to work with, or those who answer “12345” when asked how much money they spent on clothing in the last year.


Now, you might assume the questionnaires were too long and complex, which is a common problem.  Of the two projects described above, one was under 20 minutes; the other was right at ten minutes.  Our field partner described Grey Matter’s studies thus:  “Your surveys are simple and short, and we see some of the lowest dropout rates (with yours).”  (Believe me, that’s by design – it’s not fair to subject respondents to complex, difficult questionnaires and then complain when their interest wanders.)

We’re just about to release another research-on-research report, Still More Dirty Little Secrets of Online Panels – and boy, what we detailed above is just skimming the surface.

I really, really want to know what happens with all those respondents we reject.  I’d like to believe they’re dropped from the panel, but then again I still leave milk and cookies for Santa.  If repeat offender respondents were getting dropped and blocked from further participation, we would gradually see improvements in quality.  I don’t see any such improvements.  Do you?

Yeah, panel respondent quality stinks.  It’s a constant fight to weed out the bad respondents, who these days can be up to half of the respondents we get.


It’s the Client’s Fault

Why does panel respondent quality stink?

Because we as clients keep using it without demanding better.

Why are panel companies not doing more to fix their problems?

Because they don’t need to!

For one thing, it’s astonishing how few clients build sufficient quality checks into their questionnaires and data processing.  They behave as though just eliminating the bottom 10% of the questionnaire times, along with anyone who types “kjjkjkjkjkjkjk” into an open-end, solves the quality problem.  And if the panel company does digital fingerprinting, that must eliminate all the bots and duplicates, right?  As the Twitterverse likes to write:  SMH (or “shake my head” for those who still speak English and not Text).

The bottom line is that if you’re not using every method at your disposal to identify and terminate or remove problem respondents, you’re getting a ton of bad data.  Worse, your end client is making critical decisions based on that bad data.  That’s not good for business, to put it mildly.  But it also means that a lot of clients are blithely unaware of how bad some of the data they’re using really is – so they keep fielding panel surveys and accepting as valid the data they receive.
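To make the point concrete, here is what a fuller screen might look like as code – a hedged Python sketch in which the field names, cutoffs, and heuristics are purely illustrative, not any panel company’s method.  It combines a trap (“ringer”) question, an overall speed check, a straightlining check, and a crude gibberish test on open-ends:

```python
import re


def is_gibberish(text: str) -> bool:
    """Open-end looks like keyboard mashing: very few distinct characters."""
    chars = re.sub(r"\W", "", text.lower())
    return len(chars) >= 5 and len(set(chars)) <= 3


def is_straightliner(grid_answers: list[int]) -> bool:
    """Same response chosen for every item in a rating grid."""
    return len(grid_answers) > 3 and len(set(grid_answers)) == 1


def should_remove(resp: dict, min_seconds: float) -> bool:
    """Apply several independent checks; any single failure flags the respondent."""
    return (
        resp["claims_fake_charity"]             # trap / ringer question
        or resp["total_seconds"] < min_seconds  # overall speeder
        or is_straightliner(resp["grid"])
        or is_gibberish(resp["open_end"])
    )
```

None of these checks is sufficient alone – a respondent who passes the time check can still straightline, and vice versa – which is exactly why eliminating only the bottom 10% of times misses so much.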


Let’s Talk About Panels

For another thing, we’ve just casually come to accept the quality problems with panel.  Let’s even say only 10% of the respondents you get on a typical study need to be removed due to quality problems (and that would be Nirvana at this point).  Can you think of another industry in which it’s considered acceptable that 10% of the products you purchase are faulty?

What if every tenth burger at Wendy’s contained no patty?  Or every tenth Pepsi bottle was filled with industrial sludge?  Or every tenth call on T-Mobile connected you to a random person in Portugal?  You’d be apoplectic, demanding improvement or switching brands.  But when you ask panel companies to replace 10%, 20%, or more of their respondents for poor quality, they shrug and comply without question or apology – or change.

Companies don’t make improvements out of the goodness of their hearts – they make informed business decisions.  When clients care more about speed and cost than quality, vendors aren’t going to invest in improving quality.  They’re going to focus on getting you the product faster and cheaper.  Panel companies won’t improve things until their clients demand improvement.

Improvements also cost money.  Panel companies can’t improve when panel sample is largely considered a commodity, and clients will choose Panel A over Panel B because Panel A’s CPI is ten cents cheaper.  Ask any panel salesperson – they’ll tell you they can lose business over pennies per respondent.


The Cost

The cost expectation today is ludicrous.  Back when the telephone was the primary way of collecting quantitative data, we often paid well over $10 or even $20 per complete.  Now clients blanch at the thought of paying three bucks for a respondent’s answers.  Are you sure you can’t get that down to $2.75?

As a panel company client, I would gladly pay $5 per respondent rather than $2 for a sample of engaged panelists who actually take my study seriously and provide quality answers.  Of course, then my battle as a research provider would be to convince my client why the extra cost is worthwhile when they can go to my competitor and get field costs that are 60% lower.  If your research is so unimportant that you can’t spend a few thousand dollars to do it right, why are you doing it in the first place?

The constant pressure for lower costs and faster results, along with our repeated acceptance of poor quality, leads to what we have today:  a whole lot of bad data being used, and a constant, time-wasting, soul-draining fight to cull valid responses out of a morass of bogus data.


We Need to Demand Better

Panel companies are not evil.  They’re simply doing what every other business does:  giving the market what it will buy.  I read various calls for sample quality to improve and for panel companies to change their ways.  While some of the ideas presented are intriguing, what incentive do companies have to make these changes?

Unless we as clients start demanding better – and are willing to pay for demonstrably improved quality – no such incentive exists.  And we’ll continue to be stuck where we are today:  lamenting the poor quality of panel sample while simultaneously pressuring companies to get their costs down, tossing another 20% of respondents out of our studies, and wondering whether that number should have been 30% or 40%.



Disclaimer

The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.

More from Ron Sellers

Are the Fraudsters More Sophisticated Than the Researchers?
It’s amazing what some people will do in order to make a buck-fifty. Two recent studies have brought to light how sophisticated panel fraud has become...

Still More Dirty Little Secrets of Online Panels
Nearly half of your panel data is trash. Here is how to fix it.

Can Political Polls Really Be Trusted?
When political polls fail to predict the exact outcome of an election, maybe they’re not wrong…maybe we are.

Generalizing: The Bane of Insights
I often wonder whether, in research, we spend so much time navigating the complexities of gathering the data that we neglect the all-important field o...
