The insights function is becoming a self-learning system. Explore how AI is reshaping research from dashboards to dynamic, decision-ready models.
The insights function is not simply getting faster reports. It is becoming a living system that learns, recommends, and triggers research on its own.
In a few short years we have moved from periodic, static surveys to continuous, model‑driven decision support. To understand what that means for your team, it helps to zoom out and see the arc of change in market‑research technology.
This article offers a brief history, a clear‑eyed view of the state of the industry today, and a practical picture of what comes next. What comes next is profound. AI‑led Research Platforms (AIRPs) like Conveo are enabling company‑specific models that sit at the front of decisions, not buried in decks. Those models will be updated continually by targeted research designed to bring in the highest‑value new information and refresh stale assumptions.
Think less about dashboards and more about compasses that continuously recalibrate. Here is how we got here, where we are now, and what to build next.
Early modern market research was rooted in manual craftsmanship. In‑person interviews, mall intercepts, and mail surveys produced carefully tabulated results. Computing power lived on mainframes and the workflow was linear. Fieldwork happened, data was coded by hand, and tab books landed on executive desks months later. Even at this stage, technology played a role, but mostly as a back‑office calculator rather than a catalyst for change.
The first major technological leap came with CATI, CAPI, and CAWI. Computer‑assisted telephone interviewing standardized question routing and greatly reduced human error. Laptops and then PDAs allowed computer‑assisted personal interviewing in the field. When the web took off, computer‑assisted web interviewing enabled faster cycles and more complex logic. Research became more reliable and more repeatable because question flow and data capture were governed by software rather than memory.
In the 2000s, online panels and exchanges industrialized sampling. Instead of building bespoke samples from scratch, researchers tapped into large, pre‑profiled audiences. Programmatic routers sent respondents to the best available study in real time. This made research faster and more scalable but introduced new quality challenges, from inattentive completes to sample bias. Technology solved one bottleneck and created a new set of problems that the industry still works to mitigate.
Web analytics, clickstream, search queries, social listening, and app telemetry opened a new window into behavior. Suddenly, the industry could observe rather than ask. This catalyzed hybrid methodologies that mixed what people say with what they do. It also shifted the role of research away from being the only source of truth to being one source among many. Integration became a key skill, as did governance and privacy protections.
Around the early 2010s, topic modeling, sentiment analysis, and clustering helped structure the flood of unstructured data. Image recognition started to make concept and packaging tests faster by auto‑tagging visuals. Predictive models linked survey signals to commercial outcomes. But the workflow still felt like a set of point solutions. Analysts had to knit tools together, and findings still often ended up as decks that aged quickly.
The late 2010s into the early 2020s brought automation platforms for survey building, fielding, and basic analytics. Templates normalized question quality, API connections fed dashboards, and always‑on trackers tried to deliver continuity. Research ops emerged to manage standards, procurement, and throughput. Insight teams moved closer to product and growth squads, but many outputs remained static and backward‑looking.
Large language models changed the texture of research work. Qualitative analysis that took days now takes hours. Summaries, draft questionnaires, code frames, and storylines can be produced instantly and then refined by humans. Retrieval‑augmented generation turned unsearchable knowledge bases into queryable memory. Yet the real breakthrough is not speed for its own sake. It is the chance to make insights proactive. Instead of producing artifacts on a schedule, we can maintain models that anticipate decisions and pull in new data when confidence decays.
This arc is the backdrop for where we stand now.

If the last era was about removing friction from execution, today is about improving the signal that drives decisions. Three realities define the present state.
Organizations are awash in telemetry, survey trackers, NPS comments, social posts, CRM events, and market feeds. Yet leaders still ask simple questions. What should we build next? Which message moves the needle? Where is churn risk highest and why? The bottleneck is not access to data but the translation of that data into calibrated probabilities about outcomes, along with the actions those probabilities recommend.
Generative AI is helping compress that translation step, but it introduces new questions about reliability. Hallucinations, sampling artifacts, and training drift can produce confident but wrong narratives. The result is a new premium on model governance, evaluation against ground truth, and transparency about uncertainty.
Panel fatigue, fraudulent respondents, and poor attention are real challenges. But the most persistent quality gap comes from misaligned measurement. If a survey tries to answer a question that is fundamentally causal with a cross‑sectional instrument, no amount of data cleaning can save it. The fix is to match methods to questions: experiments for causal claims, conjoint for trade‑offs, diaries for habit formation, and in‑product telemetry for behavioral validation. Tooling helps, but design choices matter more.
AI can improve design by learning what works in your domain. It can propose sample sizes, priors, attention checks, and question variants that performed well in past studies. It can flag when a hypothesis is better served by an experiment than a survey, or when an uplift is too small to measure with the available audience. In other words, AI can act as a design partner that protects quality before data is collected.
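To make the design-partner idea concrete, here is a minimal sketch of one such pre-field check: estimating the sample size needed to detect a given uplift between two message variants and flagging when the available audience is too small to measure it. The formula is the standard normal-approximation calculation for comparing two proportions; the function names, thresholds, and numbers are illustrative, not taken from any specific platform.

```python
# Illustrative design check: can this uplift be measured with the audience on hand?
from math import ceil, sqrt

Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_POWER = 0.84   # statistical power = 0.80


def required_n_per_arm(baseline: float, uplift: float) -> int:
    """Sample size per arm to detect an absolute uplift over the baseline rate."""
    p1, p2 = baseline, baseline + uplift
    p_bar = (p1 + p2) / 2
    numerator = (Z_ALPHA * sqrt(2 * p_bar * (1 - p_bar))
                 + Z_POWER * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / uplift ** 2)


def design_check(baseline: float, uplift: float, available_audience: int) -> str:
    """Flag whether the planned test is feasible before any data is collected."""
    needed = 2 * required_n_per_arm(baseline, uplift)
    if needed > available_audience:
        return (f"Uplift of {uplift:.1%} needs ~{needed} respondents; only "
                f"{available_audience} available. Consider a larger effect, a longer "
                f"field period, or a different method.")
    return f"Feasible: ~{needed} respondents needed, {available_audience} available."


print(design_check(baseline=0.10, uplift=0.02, available_audience=3000))
```

Run against a hypothetical 10 percent baseline and a 2-point uplift, the check shows the test would need several thousand respondents, which is exactly the kind of warning a design partner should raise before fieldwork starts.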
Most companies have valuable but stranded knowledge: a segmentation that still explains behavior, a pricing elasticity curve from three years ago, a battlecard that addresses the key objection. The problem is recall and freshness. Teams cannot find the right artifact at the right time, or they cannot trust whether it still applies. Retrieval‑augmented generation linked to a governed knowledge graph is helping. The best teams tag artifacts with the customers, markets, and decisions they inform, then route queries through those artifacts before starting new work.
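A minimal sketch of that routing step, assuming a very simple artifact schema: each artifact is tagged with the decisions and markets it informs plus a last-validated date, and a query is answered from fresh, relevant artifacts before any new study is commissioned. The field names and staleness window are illustrative assumptions, not a specific product's data model.

```python
# Illustrative "reuse before re-research" routing over a tagged artifact library.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Artifact:
    title: str
    decisions: set[str]      # decisions this artifact informs, e.g. {"pricing"}
    markets: set[str]
    last_validated: date


STALENESS_WINDOW = timedelta(days=365)  # illustrative freshness threshold


def route_query(decision: str, market: str, library: list[Artifact], as_of: date) -> list[Artifact]:
    """Return artifacts that inform this decision and market and are still fresh."""
    return [a for a in library
            if decision in a.decisions
            and market in a.markets
            and as_of - a.last_validated <= STALENESS_WINDOW]


library = [
    Artifact("2023 pricing elasticity curve", {"pricing"}, {"DACH"}, date(2023, 6, 1)),
    Artifact("Enterprise segmentation v4", {"pricing", "messaging"}, {"DACH", "UK"}, date(2025, 2, 10)),
]

hits = route_query("pricing", "DACH", library, as_of=date(2025, 10, 1))
if hits:
    print("Reuse first:", [a.title for a in hits])
else:
    print("Nothing fresh on file; commission a targeted study.")
```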
Still, even the best knowledge bases go stale. Markets shift, competitors change positioning, and channels evolve. The critical capability is not just recall. It is a system that knows when the world has changed enough to warrant a refresh and can commission the most efficient research to update the model that powers decisions.
Recent years have delivered advances that address stubborn, decade‑old challenges: quality concerns caused by fatigued panels; the recurring struggle to turn 10–1,000 hours of interviews into a story that resonates; the wall between qualitative texture and quantitative confidence; the trust gap when a slide claims a number that cannot be traced back to the source; the cost of tailoring insights to different stakeholders without resorting to one‑size‑fits‑all decks. A new class of platforms is moving the needle by rethinking the workflow from the ground up.
1) AI‑led video interviewing at true scale
Instead of a small number of expensive moderated interviews, AIRPs can run video interviews in parallel using an AI interviewer that asks nuanced follow‑ups. The result is breadth without sacrificing depth. You get hundreds or thousands of human voices, each probed in context, which means every study yields memorable quotes and clear patterns. Practically, that changes the narrative power of the work: you do not just say what people think; stakeholders can hear it in their own words.
2) Traceability that builds trust
Every figure in an output can be traced back to the underlying clips. If a slide shows a percentage, you can click into the moments where participants said the things that number represents. If a theme is highlighted, the supporting clips are right there, timestamped and searchable. This turns insights into a high‑trust asset. Executives are not asked to accept claims; they can audit, listen, and feel the reality behind the number.
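One way to picture that provenance chain is as a small data structure in which every reported figure keeps references to the timestamped clips behind it, so a reader can move from the number to the moments it summarizes. This is a minimal, hypothetical sketch; the field names are illustrative rather than any platform's actual schema.

```python
# Illustrative provenance link from a reported figure back to timestamped clips.
from dataclasses import dataclass


@dataclass(frozen=True)
class Clip:
    interview_id: str
    start_seconds: int
    end_seconds: int
    transcript: str


@dataclass
class ReportedFigure:
    claim: str                    # e.g. "41% mention onboarding friction"
    value: float
    supporting_clips: list[Clip]

    def audit_trail(self) -> list[str]:
        """Human-readable pointers back to the evidence behind the number."""
        return [f"{c.interview_id} @ {c.start_seconds}s: \"{c.transcript[:60]}\""
                for c in self.supporting_clips]


figure = ReportedFigure(
    claim="41% mention onboarding friction",
    value=0.41,
    supporting_clips=[
        Clip("intv_0147", 312, 338, "Honestly, I gave up halfway through the setup"),
        Clip("intv_0203", 95, 120, "The first screen asked for things I did not have"),
    ],
)
for line in figure.audit_trail():
    print(line)
```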
3) Qual depth with quant confidence
By running standardized probes at scale and coding the outputs with consistent schemas, AIRPs make it possible to quantify the prevalence of themes without losing their human texture. Strategy teams get the confidence to act, backed by percentages that are alive with voices and scenes. Brand teams get clips they can use in internal storytelling. Product teams get specific language that customers actually use.
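As a rough illustration of pairing prevalence with explicit uncertainty, the sketch below counts how many coded interviews mention a theme and reports the share with a standard Wilson score interval. The coding labels and counts are made up; the interval formula is the usual one for a proportion.

```python
# Illustrative theme prevalence with a 95% confidence interval.
from math import sqrt


def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half


coded_interviews = [
    {"id": "intv_0001", "themes": {"price", "onboarding"}},
    {"id": "intv_0002", "themes": {"onboarding"}},
    {"id": "intv_0003", "themes": {"support"}},
    # ...hundreds more in a real study
]

n = len(coded_interviews)
mentions = sum("onboarding" in i["themes"] for i in coded_interviews)
low, high = wilson_interval(mentions, n)
print(f"Onboarding friction: {mentions}/{n} interviews "
      f"({mentions / n:.0%}, 95% CI {low:.0%}-{high:.0%})")
```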
4) Lower‑cost tailoring for multiple audiences
One study rarely serves everyone. Sales wants proof points. Product wants friction detail. Finance wants a quantified business case. Traditionally, tailoring meant multiple decks and many hours. Modern platforms assemble audience‑specific views from the same interview base. A stakeholder can jump from a topline storyline into clips that matter for their decision, then out again to the data roll‑up.
5) Built‑in quality and governance
Because interviewing is standardized by the AI assistant and quality checks are embedded early, studies are less vulnerable to inconsistent moderation. Consent management, provenance tags, and access controls are part of the workflow, not an afterthought. Teams can share with confidence that the right people can see the right evidence with the right permissions.
6) Faster analysis without shortcuts
Automated first‑pass synthesis accelerates coding and theme identification, but humans remain in the loop to refine findings and pressure‑test the narrative. Analysts spend less time wrangling and more time thinking. The net effect is speed with rigor, not speed instead of rigor.
The industry has plenty of tools that claim speed. The difference today is trust and traceability at scale. That combination addresses the credibility gap that has long undermined research influence. When leaders can click from a headline number to the face, voice, and words behind it, they lean in. When an AI interviewer can run hundreds of thoughtful conversations in parallel, teams can finally get both depth and breadth without blowing timelines or budgets.
Once you experience traceable, at‑scale, human‑centered evidence, the natural next question is how to keep it fresh and front‑loaded in decisions. That is where the future points. Company‑specific models will sit at the front of choices that teams make every week. They will draw on living libraries within AIRPs, and they will know when to commission new research to refresh assumptions. Some companies will build and steward those models themselves. Most will rely on partners to create, maintain, and update them, with freshness managed as a first‑class feature. With that direction set, we can now look at the road ahead.
The next wave is not a single tool. It is an operating model. A company‑specific model becomes the interface to decisions, and research becomes the engine that keeps that model honest and current.
Start with the decisions your teams make repeatedly. Which message to lead with this quarter? Which feature to fast‑track for the next release? Which retention play to trigger for a given account? For each decision, define what a good answer looks like, what signals matter most, and what uncertainty is acceptable. That framing becomes the backbone of a model that can return a recommendation with a confidence score.
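To show how lightweight that framing can be, here is a minimal sketch of a decision specification and a scoring function that returns an option with a confidence value. The signals, weights, and evidence numbers are invented for illustration, and the scoring logic is a deliberately simple stand-in, not a real model.

```python
# Illustrative decision spec: options, weighted signals, and an acceptable-confidence bar.
from dataclasses import dataclass


@dataclass
class DecisionSpec:
    question: str
    options: list[str]
    signals: dict[str, float]     # signal name -> weight
    min_confidence: float         # act only above this threshold


def recommend(spec: DecisionSpec, evidence: dict[str, dict[str, float]]) -> tuple[str, float]:
    """Weight each option's evidence by signal importance and normalise to a confidence."""
    scores = {opt: sum(spec.signals[s] * evidence[s].get(opt, 0.0) for s in spec.signals)
              for opt in spec.options}
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1.0
    return best, scores[best] / total


spec = DecisionSpec(
    question="Which message should we lead with this quarter?",
    options=["speed", "trust"],
    signals={"ad_test_lift": 0.5, "interview_theme_share": 0.3, "win_rate_delta": 0.2},
    min_confidence=0.7,
)
evidence = {
    "ad_test_lift": {"speed": 0.6, "trust": 0.4},
    "interview_theme_share": {"speed": 0.3, "trust": 0.7},
    "win_rate_delta": {"speed": 0.5, "trust": 0.5},
}
option, confidence = recommend(spec, evidence)
action = "act" if confidence >= spec.min_confidence else "gather more evidence"
print(f"Recommend '{option}' at {confidence:.0%} confidence -> {action}")
```

Even at this toy scale, the point is visible: a recommendation below the agreed confidence bar is not a failure, it is a prompt for targeted research.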
Assumptions age. Competitors move. Channels wear out. In the new model, freshness is not checked sporadically. It is monitored as part of the system. When drift is detected or confidence drops below a threshold, the model asks for help. Sometimes that help is a quick experiment. Sometimes it is a targeted qualitative sprint with a niche audience. Sometimes it is a message‑comprehension test or a pricing trade‑off. The key is that research is triggered for information value, not for tradition.
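A minimal sketch of that freshness rule, under simple assumptions: each assumption carries a confidence and an age, and when either crosses its threshold the system queues the smallest study that would repair it. Thresholds, study labels, and the decay values are illustrative.

```python
# Illustrative staleness monitor that queues targeted refresh studies.
from dataclasses import dataclass


@dataclass
class Assumption:
    name: str
    confidence: float     # 0..1, decays as evidence ages or drift is detected
    age_days: int
    refresh_study: str    # the smallest study that would update this assumption


CONFIDENCE_FLOOR = 0.7    # illustrative thresholds
MAX_AGE_DAYS = 270


def refresh_queue(assumptions: list[Assumption]) -> list[str]:
    """Return the targeted studies to commission, most decayed assumptions first."""
    stale = [a for a in assumptions
             if a.confidence < CONFIDENCE_FLOOR or a.age_days > MAX_AGE_DAYS]
    stale.sort(key=lambda a: a.confidence)
    return [f"{a.name}: commission {a.refresh_study}" for a in stale]


assumptions = [
    Assumption("message comprehension, new segment", 0.55, 270, "20 AI-led interviews"),
    Assumption("price sensitivity, mid-market", 0.80, 400, "small conjoint refresh"),
    Assumption("churn drivers, enterprise", 0.85, 60, "nothing yet"),
]
for item in refresh_queue(assumptions):
    print(item)
```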
Research calendars shift from fixed trackers and large omnibus projects toward a portfolio of small, high‑leverage studies that reduce the biggest uncertainties. Budgets follow value of information, not habit. This does not mean long‑form work disappears. It means the mix tilts toward the smallest effective study that moves confidence where it matters most.
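One way to operationalize "budgets follow value of information" is to rank candidate studies by expected confidence gain per unit of cost before committing spend. The sketch below does exactly that with invented numbers; in practice the expected gains would come from the decision model itself, and the figures here are purely illustrative.

```python
# Illustrative value-of-information ranking for a research portfolio.
from dataclasses import dataclass


@dataclass
class CandidateStudy:
    name: str
    cost: float                      # e.g. currency or researcher-days
    expected_confidence_gain: float  # estimated lift in decision confidence, 0..1

    @property
    def value_per_cost(self) -> float:
        return self.expected_confidence_gain / self.cost


candidates = [
    CandidateStudy("quarterly brand tracker wave", cost=40_000, expected_confidence_gain=0.02),
    CandidateStudy("message comprehension sprint", cost=8_000, expected_confidence_gain=0.10),
    CandidateStudy("pricing trade-off refresh", cost=15_000, expected_confidence_gain=0.12),
]

for study in sorted(candidates, key=lambda s: s.value_per_cost, reverse=True):
    print(f"{study.name}: {study.expected_confidence_gain:.0%} expected gain, cost {study.cost:,.0f}")
```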
Models provide structure and speed, but they do not replace judgment. Teams carry context the model cannot see, such as regulatory nuance or strategic partnerships. Overrides are not errors; they are learning signals. When teams disagree with the model, that disagreement becomes a prompt to investigate and, if needed, update assumptions.
Some companies will build and maintain these models in‑house. They have the data infrastructure, the talent, and the appetite to own the full stack. Many companies will rely on vendors. That is not a compromise; it is a smart use of leverage. A partner can bring a proven interviewing engine, embedded freshness checks, consent and governance flows, and a service layer that configures models to your business. The result is speed to value with lower upkeep.
Modern platforms already act like the living memory that decision models need. They capture voices at scale, structure them into reliable signals, and keep provenance one click away. In a future where a model is answering a pricing or messaging question, the AIRP is what that model reaches for when it needs human words and moments to ground a recommendation or to challenge it. When the model flags staleness, the platform can spin up new interviewing quickly, probe precisely where the gap is, and deliver fresh clips and measures that plug straight back into the decision flow. In other words, the AIRP is both evidence and engine.
You do not need a massive transformation to get started. You can move in weeks by doing three simple things and using AIRP‑style workflows to power them.
Choose a decision your team debates often. Use AI‑led video interviews to gather hundreds of voices in days. Present the recommendation with direct links to the most illuminating clips. Watch how debate shifts from belief to evidence. This builds appetite for a model‑centric way of working because people experience what high‑trust insight feels like.
For the same decision, agree on what would make you uneasy about reusing the current answer. Is it a competitor price change? A message that starts to underperform in a paid channel? A shift in a key audience? Document those triggers in plain language, and when one appears, run a small, targeted study rather than a full reset.
Take the decision you just supported and wrap a simple model around it. Even a minimal version that returns an answer and a confidence score will focus the discussion. Add a second decision the next quarter. Resist the urge to automate everything at once. Success here is cumulative.
If you lead an insights function, the skill mix you nurture should reflect this shift.
Organizationally, many companies will keep a small stewardship group inside Insights that owns the decision models and partners with vendors. Researchers stay close to customers and continue to conduct generative work, but they now feed a living system rather than a static library.
A product manager asks which of two onboarding flows will create more 7‑day activations within a new segment. The company‑specific model returns a recommendation with a 72 percent confidence score. It highlights three assumptions that matter most and shows that message comprehension for this segment was last updated nine months ago. With one click, a targeted set of AI‑led interviews launches through your platform partner. Twenty‑four hours later, fresh clips and comprehension scores arrive. The model updates its view, confidence rises to 80 percent, and the team proceeds with clear guardrails. No endless meetings, no slide archaeology, no hand‑waving. Just evidence that you can see and hear, linked to a recommendation you can act on.
Later that week, a regional sales leader asks for proof points on a new value message. Instead of building a custom deck, you share a short story page that links to four crisp clips, each one a customer stating the value in their own words. The sales leader uses those clips in training the next day. Meanwhile, the brand team uses the same evidence base to cut a thirty‑second internal film that rallies the company around the new narrative. One study, many audiences, zero duplicate work.
This is not a fantasy. It is the practical consequence of today’s advances, applied with intention.
Within roughly 90 days you will have a credible example inside your business that shows what the future feels like. Momentum comes from seeing that example, not from a slide that describes it.
The history of our field is a march toward timeliness, scale, and integration. The present is a welcome correction to long‑standing pain points, with modern AIRPs making qualitative depth and quantitative confidence work together, at speed, with trust. The future is not generic systems that summarize the internet. It is company‑specific models that encode your knowledge, speak your language, and sit at the front of decisions. Those models will stay sharp because research is baked in. When confidence dips, new evidence arrives. When assumptions age, they are refreshed. Some companies will build and maintain those models themselves. Most will rely on vendors to create, maintain, and update them, with freshness and consent handled from the start.
If you lead insights, orient your plan around three moves: put traceable evidence in the room so trust stops being a barrier; make freshness a habit that triggers small studies automatically; and wrap at least one recurring decision with a model that returns an answer and a confidence score. Do that, and you will feel the shift from publishing decks about the past to steering the future with a living, thoughtful system.
The takeaway is simple. AI is not only accelerating research. It is reshaping the entire workflow so that evidence is vivid, confidence is explicit, and updates are automatic. Platforms such as Conveo already deliver much of that experience today. The next step is to let those capabilities sit at the front of every decision you make, then let them learn and refresh as the world changes. That is where we are going, and it is closer than it looks.