November 15, 2024
Explore AI's impact on market research and discuss the need to examine participant experience, survey design, and data quality for better insights.
In case you missed the memo, AI is taking over market research. The Quirk’s conference earlier this summer was laden with it, as are many of the research community’s headlines. And despite the move-fast-break-everything ethos flowing through tech, I’m happy to see it’s making some of us more cautious.
Many companies are starting to scrutinize AI's effect on data quality, respondent engagement, sample quality, security, privacy, and other common challenges. They're painstakingly vetting AI-driven solutions for anything that could damage their insights, or the businesses they deliver them to. Industry associations are scrambling to provide guidance, like ESOMAR's 20 Questions for Buyers of AI-Based Services.
But in a space where skepticism and scrutiny are tools of the trade, it raises the question: What if we applied the same scrutiny we're showing AI to other areas of research? If we're willing to challenge AI's sweeping generalizations and default assumptions, maybe it's time to challenge a few of our own.
Why are we cramming 45 minutes of questions into a survey? Who are the people willing to complete such a survey, and how might that context be skewing our results? How is the way we deliver our insights informing our businesses' decisions? Since we've got the magnifying glass out, let's re-examine what we're doing from start to finish. Looked at this way, AI is a catalyst for questioning our own research methodologies.
Off the top of my head, participant experience is one area ready for a change. If our industry’s treatment of research participants received the same examination we’re giving AI, it would be having a long talk with HR. If not the UN.
Our insistence on long, tedious surveys needs a complete overhaul. We've known this for years, and not much has changed. These dull marathons lead to data quality nightmares like dropouts, straight-lining, respondent fatigue and more.
How did this become the data we’re all relying on? Complacency, to put it bluntly. Many researchers don’t want to change the way they ask questions because they don’t want to deal with how it might influence the responses they’re receiving.
If it sounds counterintuitive, that’s because it is.
When we scratch the surface, it’s clear that the participant experience conundrum goes beyond survey length. There’s a level of respect missing, and re-centering our approach on the consumer is how we correct it.
Think of it this way: A few years ago, my family and I had a great time at a popular theme park. But when our satisfaction surveys came around, their tedium and length left us annoyed.
Dozens of questions centered on the minutiae of my food and beverage experience, presumably so they could reward or reprimand the food staff accordingly. After so many irrelevant and surface-level questions, our responses started to reflect our impatience with the survey more than our overall experience at the park. By deciding their data collection was more important than our experience, they voided their results entirely.
Don't give this anonymous theme park the side eye, though. This is the norm, and we can do better. It's possible to design research that is both participant-friendly and insight-rich, and it starts with meeting respondents where they are. Taking a conversational, engaging, mobile-oriented approach that respects the participant's time consistently results in more nuanced insights.
I've found that making surveys feel more like a conversation with a friend works wonders. Engagement, recontact rates and answer thoughtfulness rise dramatically when using an approach that feels more authentic, mimicking the look and feel of mobile messaging. In my work at Rival, we've found that almost everyone would rather have a conversation than take what feels like a test.
One shows curiosity and interest in a person's experience; the other feels like an exam that only the supremely pleased or catastrophically annoyed would submit to. One results in deep, true-to-life insights; the other distorts them by stripping out context.
A better participant experience makes people more likely to participate in the future. It reduces research costs, increases speed to insights and—crucially—offers up insights that accurately reflect each respondent’s experience.
Participant experience and engagement are just one of many areas in the insights space that could benefit from a closer look. As we put AI through the wringer for its shortcomings, let's start holding our research to the same standards.
Not just because it’s the right thing to do, or because it makes insights professionals look good to their bosses. Let’s do it because it lets insight teams do the work they mean to be doing: Prompting business decisions that reflect reality.
If we're smart, the unintended consequence of AI in market research could be that we elevate the entire field.
Disclaimer
The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.
More from Andrew Reid