October 31, 2025
Bots aren’t the only threat to data quality. Meet the real issue: high-frequency survey takers completing 100+ surveys a day.
If your brand metrics feel off, it might be more than just a feeling
Your team diligently manages trackers, concept tests, and pre/post campaign analyses. Through them all, you’re screening out the bots. You’re eliminating the speeders. You’re rooting out the straight-liners. But something’s been creeping into your data. Something that can’t be explained by campaign performance or sampling error.
It’s not the bots.
It’s not the obvious miscreants.
It’s real people attempting over 100 surveys a day. And they’re skewing your results.
At EMI, I’ve been meticulously studying the not-so-elusive high-frequency survey taker, a group I’ll refer to here as the 100+ Club. While these respondents may not have strong opinions on the topic of your research, their influence on your data is anything but neutral.
Tracking respondent-level activity data across the sample ecosystem, I’ve found that these high-frequency survey takers make up around 4% of respondents on average. However, I’ve observed a rapid increase in their incidence, with recent research indicating they sometimes comprise upwards of 10% of a given survey’s sample. And when they sneak into your study, they add a substantial degree of noise, ultimately impacting research outcomes and the business decisions that follow.
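As a rough illustration of how respondent-level activity data can be screened, here is a minimal Python sketch that flags anyone exceeding an attempt threshold within a rolling 24-hour window. The data shape, respondent IDs, and the 100-attempt cutoff are illustrative assumptions, not EMI’s actual methodology:

```python
from datetime import datetime, timedelta

def flag_high_frequency(activity, threshold=100):
    """Flag respondents with `threshold` or more survey attempts inside
    any rolling 24-hour window.

    activity: dict mapping respondent_id -> list of attempt timestamps.
    Returns the set of flagged respondent IDs.
    """
    flagged = set()
    for rid, stamps in activity.items():
        stamps = sorted(stamps)
        lo = 0  # left edge of the sliding 24-hour window
        for hi, t in enumerate(stamps):
            # Shrink the window until it spans at most 24 hours.
            while t - stamps[lo] > timedelta(hours=24):
                lo += 1
            if hi - lo + 1 >= threshold:
                flagged.add(rid)
                break
    return flagged

# Synthetic example: "heavy" attempts 120 surveys in under a day,
# "light" attempts 5 spread over 12 hours.
base = datetime(2025, 10, 1)
activity = {
    "heavy": [base + timedelta(minutes=10 * i) for i in range(120)],
    "light": [base + timedelta(hours=3 * i) for i in range(5)],
}

print(flag_high_frequency(activity))  # → {'heavy'}
```

The sliding window avoids the blind spot of a calendar-day count, where 60 attempts before midnight and 60 after would each look innocuous on their own.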
Demographics
Device & Platform Bias
AI Early Adopters
The 100+ Club’s familiarity with, and usage of, generative AI platforms is noteworthy:


These respondents aren’t just frequent survey takers; their predictable patterns can and do undermine data quality.
Across 23 different brands measured in EMI’s research-on-research, the 100+ Club’s average aided awareness was 18 points lower than that of respondents who attempted 10 or fewer surveys in a 24-hour period. For some brands, the gap exceeded 20 points. If awareness is one of your key brand health metrics, it’s worth noting that this group drags it down and introduces unwarranted doubt.

In your brand funnel, awareness typically feeds into consideration, which in turn leads to favorability and intent. But the 100+ Club breaks that funnel completely.
Despite their lower claimed brand awareness, the high-frequency survey takers consistently rate brands more favorably than other respondent groups.

Some notable examples:
Taken together, the findings indicate that these respondents aren’t truly evaluating your brand. They’re simply defaulting to a “positive” answer. Possibly because it’s the first response option they can click on. Possibly because they know that’s what you want to hear. Either way, the implication for brand teams is unequivocal:
Without knowing the source, brand ratings can’t be trusted as a true reflection of reality.
If your team is conducting concept testing and using purchase intent or acceptance rates to determine which ideas to advance, take note that high-frequency survey takers can artificially inflate your confidence.
In research-on-research, I’ve tested concepts that spanned everything from an eco-friendly grocery chain to high-end audio products. Across every category, the 100+ Club’s purchase intent hovered 5–10 points higher than that of lower-activity groups.

This behavior can make it very difficult to identify which concepts are truly resonating. It reduces natural variance and attenuates the signal.
So, when a concept gets the “green light,” brand teams need to ask themselves:
“Who actually ‘liked’ it…and do we trust this data?”
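One way to answer that question is to re-cut purchase-intent results by survey-frequency tier before trusting the topline. A minimal sketch, assuming a simple 5-point intent scale and hypothetical tier labels:

```python
from collections import defaultdict

def top2box_by_tier(responses):
    """Share of top-2-box purchase-intent ratings (4 or 5 on a 5-point
    scale) within each survey-frequency tier.

    responses: iterable of (tier_label, rating) pairs.
    Returns a dict mapping tier_label -> top-2-box share.
    """
    counts = defaultdict(lambda: [0, 0])  # tier -> [top2_count, total]
    for tier, rating in responses:
        counts[tier][1] += 1
        if rating >= 4:
            counts[tier][0] += 1
    return {tier: top2 / total for tier, (top2, total) in counts.items()}

# Synthetic example: high-frequency takers skew positive.
responses = [
    ("100+", 5), ("100+", 5), ("100+", 4), ("100+", 3),
    ("<=10", 4), ("<=10", 3), ("<=10", 2), ("<=10", 5),
]

print(top2box_by_tier(responses))  # → {'100+': 0.75, '<=10': 0.5}
```

If the gap between tiers is large, the topline acceptance rate is being carried by the respondents least likely to have genuinely evaluated the concept.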
Even if you don’t run a panel or provide sample, you can still drive quality by asking your sample providers the right questions, such as:
Real people are inconsistent. Their brand awareness, purchase intent, and favorability can shift (often unpredictably and without obvious cause). If your data never reflects that – if every concept scores high, brand awareness is always steady, and purchase intent only climbs – it’s not a sign of perfection. It’s a warning. Take a closer look at who’s really answering before you make your next big decision.
Disclaimer
The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.