Reality Check: Market Research Wasn’t Built for Such a Complex Digital Ecosystem

Data quality in market research is now a shared responsibility. Learn how prevention, transparency and collaboration can combat adaptive fraud.

I was recently a guest on the Greenbook CEO Series podcast with Lenny Murphy, and like most good conversations, it didn’t end when we stopped recording. We spent a lot of time talking about data quality, fraud and the uncomfortable reality that market research is now operating inside a much more complex digital ecosystem than it was built for. The discussion stuck with me because it surfaced a bigger issue our industry still struggles to confront directly: data quality is no longer a downstream problem, and it is not something any one company can solve on its own.

I’ve spent nearly 20 years on the sample side of this industry, including time at Research Now and Lucid during periods of intense growth. I’ve seen the panel world evolve from tightly controlled, double opt-in loyalty communities to highly programmatic, exchange-driven ecosystems designed for speed and scale. That evolution brought real benefits, but it also introduced new vulnerabilities. As programmatic scale took hold, fraud evolved alongside it, becoming more coordinated and more tech-enabled.

“The reality is tech-enabled fraud evolves faster than any fraud prevention tool out there. So, we need to rely on our research partners to help us close the loop on what they're seeing in the survey data that we can help take a look at and tie back to the respondent, tie back to the pre-entrance signal we have so that we can continue to build better quality assurance layers in the future.”
— Patrick Stokes, Founder & CEO, Rep Data (Greenbook CEO Series)

When Data Quality Became a Supply Chain Issue

When we talk about fraud today, it’s easy to frame it as a niche issue or a technical nuisance. In reality, it’s a supply chain problem. If the inputs are compromised, everything downstream suffers, regardless of how strong the methodology or analysis may be. And unlike earlier eras, where quality failures were often accidental, today’s bad data is frequently intentional and coordinated.

Each year, billions of survey entry attempts now flow through automated screening systems, such as Research Defender, across panels, exchanges and suppliers. That scale creates efficiency, but it also creates opportunity for bad actors. What we increasingly see is not just inattentive respondents or professional survey takers, but coordinated behavior, device manipulation, automation and AI-assisted attempts to pass as legitimate participants.
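To make that concrete, here is a minimal sketch of what a layered pre-entrance check can look like. The signal names, weights and block threshold are illustrative assumptions for explanation only, not Research Defender's actual logic or any vendor's API.

```python
# Illustrative pre-entrance screen. Signals, weights, and the threshold below
# are hypothetical assumptions, not any vendor's production logic.
from dataclasses import dataclass

@dataclass
class EntryAttempt:
    device_fingerprint: str     # hashed device signature
    ip_country: str             # geolocation of the connecting IP
    declared_country: str       # country the respondent claims to be in
    is_datacenter_ip: bool      # hosting/VPN ranges are a common automation tell
    prior_attempts_today: int   # times this fingerprint was seen across studies today

def risk_score(a: EntryAttempt, seen_fingerprints: set[str]) -> float:
    """Combine independent fraud signals into a single 0-1 risk score."""
    score = 0.0
    if a.device_fingerprint in seen_fingerprints:
        score += 0.4                       # duplicate device across sessions
    if a.ip_country != a.declared_country:
        score += 0.3                       # geography mismatch
    if a.is_datacenter_ip:
        score += 0.2                       # datacenter/VPN origin
    if a.prior_attempts_today > 5:
        score += 0.3                       # velocity: unusually many entries
    return min(score, 1.0)

seen: set[str] = {"fp_8a1c"}
attempt = EntryAttempt("fp_8a1c", "VN", "US", True, 12)
if risk_score(attempt, seen) >= 0.6:       # block threshold is a tunable assumption
    print("block before survey entry")
```

The point of a layered score rather than a single rule is exactly the adaptiveness described below: when fraudsters learn to defeat one signal, the others still contribute, and new signals can be added without rewriting the gate.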

What doesn’t get discussed enough is how adaptive this environment has become. Fraud is not static. Each new control triggers a new workaround. That means no prevention approach, no matter how sophisticated, can operate effectively in isolation.

Why Prevention Can’t Sit in One Place Anymore

This is where I think the industry has an opportunity, and frankly a responsibility, to rethink how we approach data quality ownership.

Too often, quality gets framed as something that happens after fieldwork or something that sits entirely with suppliers. That mindset made more sense in a simpler ecosystem. It does not work anymore. Quality is co-owned across the research supply chain. Survey design choices, incentive structures, incidence targets, feasibility pressure and timelines all influence the type of traffic that shows up at the door.

In practice, we consistently see fraud risk rise when certain conditions come together:

  • Harder-to-reach or niche audiences with limited natural incidence
  • Longer surveys or designs with heavy cognitive load
  • Higher incentives that attract coordinated or professional respondents
  • Narrow quotas or market-specific studies that are easier to target and exploit

None of those research design choices is wrong on its own. But taken together, they change the risk profile of a study in ways that need to be acknowledged upfront, not discovered after the fact.
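As a thought experiment, a design-time checklist for these conditions might look like the sketch below. The factor names and cutoffs are hypothetical assumptions, not industry standards; the point is that the risk profile can be assessed when the study is designed rather than discovered in the data.

```python
# Hypothetical design-time risk check. The thresholds are illustrative
# assumptions chosen for the example, not industry benchmarks.
def study_risk_flags(incidence_rate: float, loi_minutes: int,
                     incentive_usd: float, quota_cells: int) -> list[str]:
    """Return the design conditions that raise a study's fraud-risk profile."""
    flags = []
    if incidence_rate < 0.05:
        flags.append("niche audience: low natural incidence")
    if loi_minutes > 25:
        flags.append("long survey: heavy cognitive load")
    if incentive_usd > 10.0:
        flags.append("high incentive: attracts coordinated respondents")
    if quota_cells > 40:
        flags.append("narrow quotas: easier to target and exploit")
    return flags

# Any single flag may be fine on its own; several together warrant
# extra screening and an upfront conversation with sample partners.
print(study_risk_flags(incidence_rate=0.02, loi_minutes=30,
                       incentive_usd=15.0, quota_cells=60))
```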

The Missing Link? Greater Ecosystem Visibility and Closing the Loop

One of the most important things researchers can do today is close the feedback loop. Pre-entrance monitoring provides valuable signals, but it is not the full picture. When research teams flag suspicious patterns in completed data and share those findings back with their partners, it allows post-survey signals to be tied back to pre-survey behavior. That connection is what enables systems to evolve in real time rather than react months later.
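In data terms, closing the loop is essentially a join: post-survey quality flags matched back to pre-entrance signal logs on a shared identifier. The sketch below uses assumed column names and toy data to show the idea.

```python
# Sketch of "closing the loop": joining post-survey quality flags back to
# pre-entrance signals via a shared transaction ID. Column names and data
# are assumptions for illustration.
import pandas as pd

# Pre-entrance signals logged when each respondent was admitted
pre_entrance = pd.DataFrame({
    "txn_id": ["t1", "t2", "t3", "t4"],
    "device_fingerprint": ["fp_a", "fp_b", "fp_a", "fp_c"],
    "entry_risk_score": [0.2, 0.1, 0.3, 0.1],
})

# Post-survey review: completes the research team flagged as suspicious
flagged = pd.DataFrame({
    "txn_id": ["t1", "t3"],
    "reason": ["straightlining", "gibberish open ends"],
})

# Tie post-survey findings back to pre-survey behavior
joined = flagged.merge(pre_entrance, on="txn_id")

# If flagged completes cluster on one fingerprint despite low entry scores,
# that is a new pattern worth encoding into the next screening layer.
print(joined.groupby("device_fingerprint").size())
```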

We’ve seen this play out firsthand. In one market-specific study, a client noticed a sudden surge of respondents exhibiting highly specific abnormal behavior. Tracing those cases back revealed a new combination of tactics we had not previously seen. That insight led directly to updates in detection logic. Without that collaboration, those patterns would have persisted longer, quietly degrading data quality across multiple projects.

This kind of partnership is not always easy. It requires transparency, trust and a willingness to acknowledge that no one has perfect visibility. But the alternative is worse. Fragmented defenses and siloed signals create exactly the environment sophisticated fraud thrives in.

Are AI Agents the Next Layer of Complexity, or Something Else?

I’m often asked whether the rise of AI agents and digital twins will make this problem impossible to solve. My view is more nuanced. AI itself is not the enemy. The real risk is building systems or models on top of compromised human data. If the foundation is flawed, everything built on top of it inherits those flaws at scale.

That is why prevention matters more than post-hoc cleaning. Once bad data is in the system, the damage is already done. Catching issues before a respondent ever reaches a survey is essential.

How to Confidently Move Forward

As the industry continues to scale, the challenges around fraud and data quality are only getting more complex. Tactics are more sophisticated, expectations for speed and certainty continue to rise, and pressure on timelines and costs has not gone away. At the same time, I’m encouraged by what I’m seeing from researchers who are asking tougher questions, pushing for greater transparency and treating data quality as a foundational requirement rather than something to be addressed at the end of a project. Listen to my podcast with Greenbook here, or reach out to us to continue the conversation!


Patrick Stokes
CEO & Founder at Rep Data

