
August 22, 2025

Losing Insights to Boredom, Not Bots: Poor Respondent Experiences Are Killing Data Quality

Conversational survey design builds trust and boosts engagement, helping researchers move past fraud filters to uncover deeper, more authentic insights.

Data quality has always been a concern in the market research industry, but the recent Op4G and Slice indictment has thrown it into even sharper focus. Seeing allegations of fabricated survey data on such a large scale is a painful reminder of how fragile trust can be. Industry groups are rallying, renewing calls for stronger fraud prevention and tougher standards.

These efforts are vital. But if we focus only on bots and bad actors, we are missing something just as damaging: the slow erosion of quality that happens when real people lose interest. Because even after the fraud is filtered out, the biggest threat to data quality is not always fake respondents. It is real ones who are bored, fatigued, or simply not engaged.

When a respondent zones out, speeds through a survey, or just clicks buttons to get to the end, that data is still technically "clean" by many standards. But let’s be honest: it isn’t thoughtful data. Fatigue, boredom, confusing flows, or a sense that their responses don’t matter—these are all quiet killers of good insights. And the worst part is, these problems are fixable. We just have to start caring enough to fix them.

This has been my soapbox for years, and it’s never been more relevant. Respondents who actually complete our surveys are valuable, and we should be treating them like VIPs. These are the people giving us their time and attention; why would we not design every part of the experience with care?

That means doing the unsexy work: being ruthless about length, ditching the ‘kitchen sink’ approach to surveys, and thinking carefully about flow and targeting. We need to stop measuring success by LOI (length of interview) alone and start asking: was this pleasant to do? Was it intuitive? Did we ask people things they actually knew or cared about?

Strategies to Boost Engagement and Protect Data Quality

If you’re ready to rethink how your research gets done—beyond the fraud filters and post-field cleaning—here are a few places to start:

1. Make it conversational.

Most surveys are still written like middle school tests, not conversations. But people respond better when the experience feels human. That means rethinking tone, using language that sounds natural, and building surveys that adapt in real time. Mobile-first, chat-style formats go a long way toward making the process feel intuitive and enjoyable, and that shows up in your data. In some new research-on-research, we heard first-hand from respondents about the power of this approach: “It feels like I’m actually talking to a person…I like it and it’s not like I’m just filling out a quiz in the back of a school classroom. It’s like I’m actually having a conversation.”

There’s magic that happens when you actually get engagement right. We ran a QR code pilot with a major restaurant chain where we expected 2,500 completes over three months. We hit that number in ten days. Why? Because the experience was mobile, conversational, and easy. Nothing fancy. Just a low-friction, well-designed overall experience. That completion rate changed everything, not just in terms of sample size, but in what we were able to learn. Better data quality, not because of better fraud filters, but because people actually finished what they started.

2. Respect their time.

When designing a study, set a timer once you’ve done your draft. If it takes you more than 10 minutes to get through it, chances are it’s too long. Focus on what you really need to know. Be strategic about where you use profile data so you don’t waste time asking for information you already have. And for the love of respondents everywhere, stop asking people 17 grid questions about a brand they already told you they aren’t familiar with.

3. Get thoughtful about thoughtfulness.

Not all respondents bring the same level of depth and care to their answers, and that’s okay. What matters is building tools and processes that can detect and elevate quality when it appears. We call this a “thoughtfulness score,” and it’s something we’re starting to unpack across multiple layers. For example, when you look at the average open-ended response in a traditional survey, you’ll probably get about 10 words.

Ask the same question in a mobile-first, conversational format and you’ll get closer to 25. Add AI-driven probing (smart follow-up questions) and that number jumps to 48, and with video we see an average of 78 words per response. That richness is about more than volume; it’s about nuance, context, and emotion. More thoughtful input improves the quality of downstream analysis, too, especially when using AI to identify themes. We see this as the start of a broader shift: a return to the open-end, with modern tools that finally make it scalable and meaningful.
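As a rough illustration of the word-count layer of such a score, here is a minimal sketch in Python. The per-format benchmarks (10, 25, 48, and 78 words) come from the averages cited above; the `thoughtfulness_score` function and its ratio-based logic are purely hypothetical, not an actual scoring model.

```python
# Hypothetical sketch of a word-count-based "thoughtfulness" signal.
# The benchmark figures are the per-format averages cited in the article;
# the scoring logic itself is illustrative, not a real product formula.

FORMAT_BENCHMARKS = {
    "traditional": 10,     # avg words per open-end in a traditional survey
    "conversational": 25,  # mobile-first, chat-style format
    "ai_probed": 48,       # conversational format + AI-driven follow-ups
    "video": 78,           # transcribed video responses
}

def thoughtfulness_score(response: str, survey_format: str) -> float:
    """Return response length relative to its format's benchmark.

    A score near 1.0 means the respondent wrote about as much as the
    average for that format; well below 1.0 may flag a low-effort answer.
    """
    benchmark = FORMAT_BENCHMARKS[survey_format]
    word_count = len(response.split())
    return word_count / benchmark

# Example: a 5-word answer in a traditional survey scores 5 / 10 = 0.5
print(thoughtfulness_score("It was fine I guess", "traditional"))
```

Normalizing by format matters here: a 20-word answer is above average in a traditional survey but below average for video, so raw word counts alone would unfairly penalize richer formats.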

4. Close the loop.

You want respondents to feel like partners, not data sources. That means showing them their voice mattered. Sharebacks, thank-you videos, or just a quick note about what their input helped shape—these small touches build goodwill and increase the chances they’ll participate again. Especially when you’re investing in ongoing communities or panels, this kind of respect makes a big difference.

Fraud is still worth fighting. But if we’re serious about data quality, we need to stop thinking that’s where the job ends. The truth is, we lose more insights to boredom and burnout than we do to bots. It’s time we start designing with that in mind.

Tags: data quality, respondent experience, survey design

Comments

Beatriz Tejedor

August 26, 2025

Hi Jennifer, Great article! In my day-to-day work, I often struggle with clients to help them understand that we need to change the paradigm if we want to get valuable answers. Many don’t realize that society has changed and that our attention spans have decreased. What concerns me most recently is teaching them that Quality Control Questions are not the solution—panelists often feel like they are taking an exam, as if we expected people to act like robots. Clients sometimes overlook that someone can get distracted at times and still provide meaningful insights. Articles like this are very helpful in driving change.

Frank Kelly

August 26, 2025

Hello Jennifer, all good points, but I especially like #1, to make it conversational. I think that is what most people get wrong. I have been testing adjusting the prompts on the AI probing to infuse humor or to adapt to the preferred style of the interviewee, and I am seeing that it increases engagement even further. I see a day where the AI prompts are customized at an individual level based on conversational style, interests, and profile data.

Marc McDonough

August 26, 2025

One quick thought. Each and every time a client approves and purchases a completed survey for 75 cents via an aggregator, they do so willingly, knowing that the end respondent will not ever truly receive an incentive. How could they? The aggregator gets 75 cents, they pay their supplier what, 40 cents for the complete, who then pays the respondent what...10 cents? In most cases incentive accounts cannot be redeemed until they hit $20 or more. Are we then saying that the eventual $20 reward is paid out after the respondent takes 200 surveys? This is a very important data point to understand, as it's very difficult to expect quality data when the end user of the survey will never see a nickel for their efforts...or in this example the dime they were promised. Pay a fair price for a survey completion, make it interesting as noted in this article, and you get the beginning of a quality complete. Simple I know, but it's reality when racing to the bottom of the CPI scale. Thank you for the thoughtful article and for considering my additional context that we all know, but tend to ignore when looking at the bottom line and return on investment. Best, Marc

Disclaimer

The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.

More from Jennifer Reid

Reimagining Insight Communities: 3 Strategies to Maximize Engagement and Research ROI

New Research: How Chat Surveys Sent via Messaging Platforms Affect the Respondent Experience
