August 22, 2025
Conversational survey design builds trust and boosts engagement, helping researchers move past fraud filters to uncover deeper, more authentic insights.
Data quality has always been a concern in the market research industry, but the recent Op4G and Slice indictment has thrown it into even sharper focus. Seeing allegations of fabricated survey data on such a large scale is a painful reminder of how fragile trust can be. Industry groups are rallying, renewing calls for stronger fraud prevention and tougher standards.
These efforts are vital. But if we focus only on bots and bad actors, we are missing something just as damaging: the slow erosion of quality that happens when real people lose interest. Because even after the fraud is filtered out, the biggest threat to data quality is not always fake respondents. It is real ones who are bored, fatigued, or simply not engaged.
When a respondent zones out, speeds through a survey, or just clicks buttons to get to the end, that data is still technically "clean" by many standards. But let's be honest: it isn't thoughtful data. Fatigue, boredom, confusing flows, a sense that their responses don't matter: these are all quiet killers of good insights. And the worst part is, these problems are fixable. We just have to start caring enough to fix them.
This has been my soapbox for years, and it's never been more relevant. Respondents who actually complete our surveys are valuable, and we should be treating them like VIPs. These are the people giving us their time and attention. Why would we not design every part of the experience with care?
That means doing the unsexy work: being ruthless about length, ditching the "kitchen sink" approach to surveys, and thinking carefully about flow and targeting. We need to stop measuring success by length of interview (LOI) alone and start asking: was this pleasant to do? Was it intuitive? Did we ask people things they actually knew or cared about?
If you’re ready to rethink how your research gets done—beyond the fraud filters and post-field cleaning—here are a few places to start:
Most surveys are still written like middle school tests, not conversations. But people respond better when the experience feels human. That means rethinking tone, using language that sounds natural, and building surveys that adapt in real time. Mobile-first, chat-style formats go a long way toward making the process feel intuitive and enjoyable, and that shows up in your data. In some new research-on-research, we heard first-hand from respondents about the power of this approach: “It feels like I’m actually talking to a person…I like it and it’s not like I’m just filling out a quiz in the back of a school classroom. It’s like I’m actually having a conversation.”
There’s magic that happens when you actually get engagement right. We ran a QR code pilot with a major restaurant chain where we expected 2,500 completes over three months. We hit that number in ten days. Why? Because the experience was mobile, conversational, and easy. Nothing fancy. Just a low-friction, well-designed overall experience. That completion rate changed everything, not just in terms of sample size, but in what we were able to learn. Better data quality, not because of better fraud filters, but because people actually finished what they started.
When designing a study, set a timer once you’ve done your draft. If it takes you more than 10 minutes to get through it, chances are it’s too long. Focus on what you really need to know. Be strategic about where you use profile data so you don’t waste time asking for information you already have. And for the love of respondents everywhere, stop asking people 17 grid questions about a brand they already told you they aren’t familiar with.
Not all respondents bring the same level of depth and care to their answers, and that’s okay. What matters is building tools and processes that can detect and elevate quality when it appears. We call this a “thoughtfulness score,” and it’s something we’re starting to unpack across multiple layers. For example, when you look at the average open-ended response in a traditional survey, you’ll probably get about 10 words.
Ask the same question in a mobile-first, conversational format and you’ll get closer to 25. Add AI-driven probing (smart follow up questions) and that number jumps to 48, and with video we see an average of 78 words per response. That richness is about more than volume, it’s about nuance, context, and emotion. More thoughtful input improves the quality of downstream analysis, too, especially when using AI to identify themes. We see this as the start of a broader shift: a return to the open-end, with modern tools that finally make it scalable and meaningful.
You want respondents to feel like partners, not data sources. That means showing them their voice mattered. Sharebacks, thank-you videos, or just a quick note about what their input helped shape—these small touches build goodwill and increase the chances they’ll participate again. Especially when you’re investing in ongoing communities or panels, this kind of respect makes a big difference.
Fraud is still worth fighting. But if we’re serious about data quality, we need to stop thinking that’s where the job ends. The truth is, we lose more insights to boredom and burnout than we do to bots. It’s time we start designing with that in mind.
More from Jennifer Reid