The Recontact Rate: A New Standard for Measuring Respondent Quality

In response to the Opinions for Good scandal, a new metric is proposed to better measure respondent quality and restore trust in market research data.

The recent indictments of Opinions for Good and Slice MR executives for orchestrating a $10 million survey fraud scheme have shaken the market research industry. They represent one of the most serious breaches of trust our field has faced.

But this isn’t just about one scandal. It’s the latest evidence of a system under strain.

Data quality has been quietly eroding for years. Short-term sampling models, automation-driven recruiting, and gig-style participation have created an environment where fraud doesn't sneak in—it thrives. Bots, click farms, and AI-generated responses aren’t fringe risks anymore; they’re part of the daily reality.

Industry groups have stepped up their efforts, but one thing is still missing: a reliable, actionable way to measure the quality of the people behind the data.

That’s where the recontact rate comes in—a straightforward, scalable metric that could bring clarity to how we assess respondent reliability.

A New Metric: Recontact Rate Definition

I propose we adopt the recontact rate as a core metric for assessing data quality and respondent integrity.

The recontact rate is simple: it’s the percentage of respondents who can be successfully re-engaged after a defined period—say, three, six, or twelve months. This isn’t just a measure of contactability. It’s a proxy for reliability, engagement, and trustworthiness.
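The definition above can be sketched as a small calculation. This is a minimal illustration, not a standard implementation: the data shapes, the six-month window, and the function name are assumptions for the example.

```python
from datetime import date, timedelta

def recontact_rate(first_contacts, recontacts, window_days=180):
    """Share of respondents successfully re-engaged within a window.

    first_contacts: dict of respondent_id -> date of first completed survey
    recontacts:     dict of respondent_id -> date of a later completed survey
    window_days:    re-engagement window (e.g. ~90, ~180, or ~365 days)
    """
    if not first_contacts:
        return 0.0
    hits = 0
    for rid, first in first_contacts.items():
        later = recontacts.get(rid)
        # Count only genuine re-engagement: a later completion inside the window.
        if later and timedelta(0) < (later - first) <= timedelta(days=window_days):
            hits += 1
    return hits / len(first_contacts)

# Hypothetical panel: three respondents first surveyed on the same day;
# one returns within six months, one returns a year later, one never returns.
firsts = {"r1": date(2024, 1, 1), "r2": date(2024, 1, 1), "r3": date(2024, 1, 1)}
laters = {"r1": date(2024, 5, 1), "r2": date(2025, 1, 1)}
rate = recontact_rate(firsts, laters)  # only r1 falls inside the 180-day window
```

In practice the numerator would also need to control for how many respondents were actually invited back, so a production version would track invitations as well as completions.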

Figure A

In research, replication is essential. If results can’t be reproduced, they’re not credible. Likewise, if respondents disappear after a single survey, the integrity of the insight falls apart. A strong recontact rate, on the other hand, signals stability and opens the door to true replication studies—something our industry sorely needs.

The Problem with the Current Model

The evolution of sampling has brought benefits, but also fragmentation. Blended sources and river sampling now dominate the landscape. They allow for scale and cost-efficiency but strip away any meaningful connection with the respondent.

This transactional model leads to disengagement. Poor survey experiences, excessive redirects, and low incentives discourage thoughtful participation. As high-quality respondents exit, fraudsters fill the gap, emboldened by tools that allow them to mimic real behavior and slip past traditional detection methods.

While automation and API-driven sampling improve efficiency, they’ve also made it easier to commoditize the respondent. The result: short-term gains, long-term erosion of trust and data quality.

Defining Respondent Quality

To tackle this, we need to rethink how we define a “quality” respondent. Three key traits matter:

  • Truthfulness: Honest, accurate responses without misrepresentation.
  • Thoughtfulness: Engagement with questions, not just checking boxes.
  • Assertiveness: Willingness to give clear, meaningful answers, not vague justifications.

Short-term metrics like attention checks or completion rates only scratch the surface. Real insight into respondent behavior comes from sustained interaction, observing how someone answers over time.

That’s why the recontact rate matters. It’s not just a data point. It’s evidence that a person is willing to show up again—and that they’re likely to give you the same quality of thought each time.

Panels shouldn’t be static, but they should be stable. Recontact efforts must be balanced with smart panel renewal. Long-term respondents bring consistency but may also develop habits or biases. Mixing new and experienced participants helps mitigate these risks.

Ultimately, establishing best practices around recontact rates can create a new baseline for evaluating respondent quality.

A Wake-Up Call

The recent indictment of Opinions for Good executives for orchestrating a $10 million survey fraud scheme is a stark reminder: the cost of neglecting quality isn’t just bad data. It’s reputational damage and lost trust across the entire ecosystem.

We can—and must—do better.

The recontact rate offers a clear, replicable, and meaningful way to assess data quality in an era that demands more than surface-level checks. As we build the future of market research, it’s time to prioritize metrics that reflect long-term engagement, not just short-term efficiency.

Tags: respondent experience, data quality, artificial intelligence


Disclaimer

The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.
