March 20, 2026
Data quality in market research is now a shared responsibility. Learn how prevention, transparency and collaboration can combat adaptive fraud.
I was recently a guest on the Greenbook CEO Series podcast with Lenny Murphy, and like most good conversations, it didn’t end when we stopped recording. We spent a lot of time talking about data quality, fraud and the uncomfortable reality that market research is now operating inside a much more complex digital ecosystem than it was built for. The discussion stuck with me because it surfaced a bigger issue our industry still struggles to confront directly: data quality is no longer a downstream problem, and it is not something any one company can solve on its own.
I’ve spent nearly 20 years on the sample side of this industry, including time at Research Now and Lucid during periods of intense growth. I’ve seen the panel world evolve from tightly controlled, double opt-in loyalty communities to highly programmatic, exchange-driven ecosystems designed for speed and scale. That evolution brought real benefits, but it also introduced new vulnerabilities. As programmatic scale took hold, fraud evolved alongside it, becoming more coordinated and more tech-enabled.
“The reality is tech-enabled fraud evolves faster than any fraud prevention tool out there. So, we need to rely on our research partners to help us close the loop on what they're seeing in the survey data that we can help take a look at and tie back to the respondent, tie back to the pre-entrance signal we have so that we can continue to build better quality assurance layers in the future.”
— Patrick Stokes, Founder & CEO, Rep Data (Greenbook CEO Series)
When we talk about fraud today, it’s easy to frame it as a niche issue or a technical nuisance. In reality, it’s a supply chain problem. If the inputs are compromised, everything downstream suffers, regardless of how strong the methodology or analysis may be. And unlike earlier eras, where quality failures were often accidental, today’s bad data is frequently intentional and coordinated.
Across panels, exchanges and suppliers, billions of survey entry attempts now flow through automated systems, such as Research Defender, each year. That scale creates efficiency, but it also creates opportunity for bad actors. What we increasingly see is not just inattentive respondents or professional survey takers, but coordinated behavior, device manipulation, automation and AI-assisted attempts to pass as legitimate participants.
What doesn’t get discussed enough is how adaptive this environment has become. Fraud is not static. Each new control triggers a new workaround. That means no prevention approach, no matter how sophisticated, can operate effectively in isolation.
This is where I think the industry has an opportunity, and frankly a responsibility, to rethink how we approach data quality ownership.
Too often, quality gets framed as something that happens after fieldwork or something that sits entirely with suppliers. That mindset made more sense in a simpler ecosystem. It does not work anymore. Quality is co-owned across the research supply chain. Survey design choices, incentive structures, incidence targets, feasibility pressure and timelines all influence the type of traffic that shows up at the door.
In practice, we consistently see fraud risk rise when certain conditions come together: compressed timelines, low-incidence targets, elevated incentives and sustained feasibility pressure.
None of those research design choices is wrong on its own. But taken together, they change the risk profile of a study in ways that need to be acknowledged upfront, not discovered after the fact.
One of the most important things researchers can do today is close the feedback loop. Pre-entrance monitoring provides valuable signals, but it is not the full picture. When research teams flag suspicious patterns in completed data and share those findings back with their partners, it allows post-survey signals to be tied back to pre-survey behavior. That connection is what enables systems to evolve in real time rather than react months later.
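To make the idea of closing the loop concrete, here is a minimal sketch of tying post-survey quality flags back to pre-entrance signals. All names and data here are hypothetical (the respondent IDs, the `device_risk` and `geo_mismatch` signals, and the flagging itself are illustrative, not any vendor's actual schema); the point is the join: once flagged completes are matched to the signals seen at the door, over-represented signal combinations become candidates for new detection rules.

```python
from collections import Counter

# Hypothetical records: pre-entrance signals captured before survey entry,
# keyed by respondent ID, plus a set of IDs flagged after fieldwork.
pre_entrance = {
    "r1": {"device_risk": "high", "geo_mismatch": True},
    "r2": {"device_risk": "low", "geo_mismatch": False},
    "r3": {"device_risk": "high", "geo_mismatch": True},
}
flagged_post_survey = {"r1", "r3"}  # flagged in the completed data

# Tie post-survey flags back to pre-survey behavior: count which
# signal combinations co-occur with flagged completes.
signal_counts = Counter()
for rid in flagged_post_survey:
    signals = pre_entrance.get(rid)
    if signals:
        signal_counts[tuple(sorted(signals.items()))] += 1

# The most common combination among flagged respondents is a pattern
# worth feeding back into pre-entrance detection logic.
suspect_pattern, count = signal_counts.most_common(1)[0]
```

In a real ecosystem this join happens across suppliers and at much larger scale, but the mechanic is the same: without the researcher sharing the post-survey flags, the pre-entrance side has nothing to correlate against.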
We’ve seen this play out firsthand. In one market-specific study, a client noticed a sudden surge of respondents exhibiting highly specific abnormal behavior. Tracing those cases back revealed a new combination of tactics we had not previously seen. That insight led directly to updates in detection logic. Without that collaboration, those patterns would have persisted longer, quietly degrading data quality across multiple projects.
This kind of partnership is not always easy. It requires transparency, trust and a willingness to acknowledge that no one has perfect visibility. But the alternative is worse. Fragmented defenses and siloed signals create exactly the environment sophisticated fraud thrives in.
I’m often asked whether the rise of AI agents and digital twins will make this problem impossible to solve. My view is more nuanced. AI itself is not the enemy. The real risk is building systems or models on top of compromised human data. If the foundation is flawed, everything built on top of it inherits those flaws at scale.
That is why prevention matters more than post-hoc cleaning. Once bad data is in the system, the damage is already done. Catching issues before a respondent ever reaches a survey is essential.
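As a toy illustration of catching issues before a respondent reaches a survey, the sketch below blocks entry attempts that reuse a device fingerprint too quickly. This is a simplified assumption of one pre-entrance control, not how any particular tool works; the fingerprint values and the one-hour threshold are invented for the example.

```python
# Hypothetical pre-entrance screen: reject entry attempts that reuse a
# device fingerprint too soon after a prior attempt.
seen_fingerprints: dict[str, float] = {}
MIN_SECONDS_BETWEEN_ATTEMPTS = 3600.0  # illustrative threshold

def allow_entry(fingerprint: str, now: float) -> bool:
    """Return False if this fingerprint attempted entry too recently."""
    last_seen = seen_fingerprints.get(fingerprint)
    seen_fingerprints[fingerprint] = now
    if last_seen is None:
        return True
    return (now - last_seen) >= MIN_SECONDS_BETWEEN_ATTEMPTS

first = allow_entry("fp_abc", now=0.0)             # first attempt passes
repeat = allow_entry("fp_abc", now=60.0)           # rapid repeat is blocked
later = allow_entry("fp_abc", now=60.0 + 3600.0)   # later attempt passes
```

The real lesson of the article is that a static rule like this is exactly what adaptive fraud learns to route around, which is why such checks only stay useful when post-survey findings keep updating them.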
As the industry continues to scale, the challenges around fraud and data quality are only getting more complex. Tactics are more sophisticated, expectations for speed and certainty continue to rise, and pressure on timelines and costs has not gone away. At the same time, I’m encouraged by what I’m seeing from researchers who are asking tougher questions, pushing for greater transparency and treating data quality as a foundational requirement rather than something to be addressed at the end of a project. Listen to my podcast with Greenbook here, or reach out to us to continue the conversation!