April 24, 2026
Synthetic research is evolving fast. Beyond the hype, what can it truly do well today — and where does it still fall short for insights teams?
By now, most insights leaders have heard the headline version of the synthetic research story. It is faster. It is cheaper. It is improving quickly. And whether they are comfortable with it or not, clients and internal stakeholders are increasingly curious about what it can do.
When I first returned to my office after this year’s Qualtrics X4 event, synthetic research was one of the themes I wanted to dive deeper into. I knew insights teams were wrestling with its place in the researchers' toolbox. It had been everywhere at the event: on stage, in product sessions, in my interviews, even in conversations during breaks.
But I also knew I did not want to publish a quick reaction piece that treated the tension around synthetic research as if it matched the energy of the event; it did not.
Synthetic research is one of those topics you have to sit with after learning about it. That's why I included it in a list of questions insights leaders are pondering, the first piece I wrote when I returned home.
A few weeks later, with some time to sit with my notes and our own IIEX North America now around the corner, I’m clearer on what I want to share with Greenbook's readers on the topic.
The synthetic research conversation is maturing. It is not enough to ask whether synthetic is real, whether it is improving, or whether it will affect the work of researchers and insights leaders. The answer to all of that is yes.
I keep coming back to what is true right now. What can synthetic research actually do well today, and what does it still struggle to do?
Those are the data points we need, because practitioners are making decisions now, not just sitting in awe of the pace of the technology. So let's get real, and current.
Part of the appeal is obvious. Traditional researchers are working in an environment where speed pressures are intensifying, internal stakeholders are less patient, and respondent fatigue is a real problem.
There is a growing mismatch between the volume of questions organizations want answered and the amount of human attention they can realistically keep asking for. Respondents are tired of completing our surveys.
Jordan Harper, Principal AI Thought Leader at Qualtrics, put that plainly: “People have got sick of answering questions on trivial stuff.”
That is part of what makes synthetic so appealing right now. It enters the conversation as a possible solution to a very real industry challenge. Synthetic offers a compelling promise: directional answers without placing the same burden on respondents, along with shorter timelines and more room to iterate before spending more on live fieldwork.
I mean ... that's hard to argue with if you are a decision maker.
For teams figuring out where to begin, Ali Henriques, Head of Qualtrics Market Research, framed it this way when I interviewed her at X4: “There are no-risk applications. A no-risk application is pre-testing the study design with synthetic. Tell me what the harm is. Get a sense of the distributions. Does this make sense? Are there going to be any surprises?”
That's such a useful way to think about it.
The most immediate value of synthetic may not be that it replaces an existing method entirely. It may be that it gives research teams a lower-risk way to pre-test study designs, sanity-check distributions, and surface surprises before committing to live fieldwork.
In that sense, synthetic research may help teams get more efficient not by eliminating work, but by helping them determine where deeper work is truly needed.
And by setting them up for success in the next phase of work.
That feels like one of the clearest wins in the synthetic story right now.
One concern that continues to surface in conversations about AI and synthetic research is whether the outputs flatten insight, smoothing over anomalies and outliers in favor of the broadest, safest pattern.
That issue came up in my conversation with Harper, too. His answer was that AI often gives you what you ask it to look for. As he explained to me: “If you tell it to find general patterns, it’s going to flatten it and find the big patterns. If you tell it to find the weird stuff that doesn’t line up, it will go and find that. But you need to tell it to.”
That was so simple once I heard it. Of course!
The temptation with AI is to accept the first clean synthesis it gives you. But if the system is often finding what it is asked to find, then the burden is still on the researcher to ask for the anomalies, the contradictions, the strange signals, and the edge cases that may matter more than the average.
So the flattening is not a reason to reject synthetic. It is a reminder that research judgment still matters, perhaps more than ever.
If you think the machine has done the hardest part and it was flat … take another look, because the hardest part may still be knowing what deserves a second look.
While I’m looking forward to learning more about how teams are using synthetic respondents and synthetically derived data, I’m staying highly aware of the limitations.
Harper used a simple analogy when explaining to me why human research still matters even as synthetic improves. “You still need to measure the weather today to be able to make good predictions in the future,” he said.
That is such a useful way to frame the issue. Synthetic models do not operate in a timeless vacuum. Markets change. News changes. Culture changes. Language changes.
Categories shift. People grow. Behaviors evolve.
What people mean by something this month may not be what they meant six months ago. That is especially true when the issue is emotionally charged, socially dynamic, or shaped by events unfolding in real time.
That is one of the reasons human input still matters so much. Not just for validation in the abstract, but for keeping models and assumptions tethered to the present.
That is the clearest reason to be confident that, however synthetic-enabled the future becomes, it will still require fresh human signal.
The most useful stance right now, at least from where I sit, is to adopt the mindset of a learner.
Work to understand where synthetic can help, where it struggles, and how to blend it into a broader process without pretending it is a universal answer.
Avoid the false comfort of pretending the technology is too immature to matter. It matters already.
If you are not the researcher shaping how it gets used, you are leaving that work to someone else. And someone else will certainly do it.
Right now, synthetic research looks most credible when it is used as a complement to human research: a way to pre-test, iterate, and point deeper work in the right direction.
That is not a small role. But it is also not the same as replacing human inquiry. Not yet. Hopefully not ever in the places that matter most.
That feels like the right posture to take on synthetic research at this moment: don't jump in just for the hype, don't back away in fear, but learn and experiment, with discipline.
More from Karen Lynch