January 7, 2022

A Researcher Who Became a Respondent (Part One): Fixing Data Quality at the Source

How can we improve the sampling process itself?

Editor’s Note: This is the first installment of A Researcher Who Became a Respondent. View part two here, and part three here.


Automation and programmatic technologies have created efficiencies that have commoditized sample, and, by extension, the people behind it. The systems that route panelists through surveys are designed to optimize distribution, with little consideration for the user experience. Having just embarked on the panelist journey myself, I can relate first-hand to the frustrations our respondents face every day.

A few weeks ago, I saw an ad on Twitter for paid surveys. As an experienced market researcher, I jumped on the opportunity to put myself in the respondent’s shoes and eagerly signed up. What happened next was an eye-opening experience.

The second I signed up, I got email invitations for eight different panels. I enrolled in a couple of panels and started taking surveys. Quickly, I found myself entangled in a loop of pre-screeners that would determine which surveys I could potentially qualify for.

By the time I got to a live survey, I was already exhausted from being bounced around like a pinball.

I was also annoyed and in a bad mood because I had to answer the same screener questions over and over (gender, age, ZIP code, and so on). Despite my goodwill and genuine love for surveys, I hardly had any time, energy, or patience left for the survey I finally qualified for.

Putting my researcher hat back on, I wondered what this meant for our own surveys. On the one hand, programmatic sampling technologies are needed to meet the industry’s demand for sample, so going back to traditional proprietary panels seems unlikely. On the other hand, it is clear that our research practices have to evolve and adapt to this new sampling model. While there are many ways to improve data quality from design to analysis, I want to focus this article specifically on how we can mitigate some of the issues caused by the sampling process itself.

 

1. Short screeners can prevent survey fatigue.

Before this experiment, I had underestimated the survey fatigue caused by repeated pre-screening before panelists even get to our survey. One has to wonder how many times a normal person can go through this funnel before dropping out, and what type of respondent has enough stamina to stick around. I am more mindful than ever of questionnaire length, and especially of keeping the screener short, as ours is one of many that panelists will go through before they receive a reward for their time.

 

2. Fresh respondents are more attentive.

While this is arguably an assumption, it is hard to argue that cognitive performance is unaffected by fatigue, so it helps to know just how tired a respondent is before they start our survey. A self-reported measure can identify respondents who have already spent too much time on surveys to be fully engaged, and can help us decide whether they should proceed with the survey or be flagged for removal during post-fieldwork cleaning once their answers have been reviewed. For example, we can ask, “Before you got to this survey today, how long (in minutes) had you been filling out other surveys?” or, more generally, “How are you feeling right now (from energized to tired)?”
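As a rough illustration of how such self-reported measures could feed a cleaning step, here is a minimal Python sketch. The column names, scale, and thresholds are hypothetical assumptions for illustration, not fields from any particular survey platform.

```python
import pandas as pd

# Hypothetical column names and thresholds -- illustrative assumptions only.
PRIOR_MINUTES_COL = "minutes_on_other_surveys"  # "how long had you been filling out other surveys?"
ENERGY_COL = "energy_1to5"                      # 1 = energized ... 5 = tired
MAX_PRIOR_MINUTES = 30                          # tolerance before flagging
TIRED_THRESHOLD = 4                             # a self-rating of 4 or 5 counts as tired

def flag_fatigued(responses: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the responses with a boolean 'fatigue_flag' column
    marking respondents who reported heavy prior survey-taking or low energy.
    Flagged rows are candidates for review, not automatic removal."""
    out = responses.copy()
    out["fatigue_flag"] = (
        (out[PRIOR_MINUTES_COL] >= MAX_PRIOR_MINUTES)
        | (out[ENERGY_COL] >= TIRED_THRESHOLD)
    )
    return out
```

In practice, a flag like this would be weighed alongside other quality signals (speeding, straight-lining, gibberish open ends) before deciding whether a respondent stays in the dataset.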

 

3. Sample should not be a black box.

Not all panels are created equal. By understanding the various recruiting models, how the panels are managed, how the panelists are incentivized, and how much of the supply they use is not their own, we can make informed decisions about which panels are likely to yield higher quality data.

 

4. Smaller sample sizes on some projects may make more sense.

While there are sometimes valid statistical reasons to require a large sample, that is not always the case, and chasing large sample sizes can compromise the quality of our data. When we are scrambling to fill quotas or looking for a needle in a haystack, lower-quality sample sources that we would otherwise avoid may end up finding their way into our dataset.
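To make the diminishing returns concrete: under simple random sampling assumptions (which online panels only approximate), the margin of error for a proportion shrinks with the square root of the sample size, so doubling a sample from 1,000 to 2,000 completes buys less than a point of precision. A quick sketch:

```python
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion under simple random sampling."""
    return z * sqrt(p * (1 - p) / n)

for n in (400, 1000, 2000):
    print(f"n={n}: +/-{margin_of_error(n) * 100:.1f} points")

# n=400: +/-4.9 points
# n=1000: +/-3.1 points
# n=2000: +/-2.2 points
```

If those extra completes come from lower-quality sources, the precision gained on paper can easily be wiped out by the noise they introduce.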

 

5. Other methodologies may be better suited to your research needs.

Online research has become the de facto methodology for quant, and for good reason: it’s efficient and gives you the best “bang for your buck.” However, there are scenarios where other methodologies, such as a custom recruit, face-to-face research, or qualitative methods, will lead to better insights (e.g., hard-to-reach audiences, or countries where online samples are not representative).

There are many ways to improve data quality, but as any good chef will tell you, a dish is only as good as its ingredients. Quality starts at the source.

Tags: data quality, online panels, online surveys, survey design
