Greenbook Future List Spotlight: Alfred Wahlforss

Discover how 2025 Future List Honoree Alfred Wahlforss blends film, empathy, and curiosity to redefine storytelling in market research.

Editor’s Note: The following interview features a 2025 Greenbook Future List honoree, Alfred Wahlforss. The Greenbook Future List recognizes leadership, professional growth, personal integrity, passion, and excellence in the next generation of consumer insights and marketing professionals within the first 10 years of their careers.


As a 2025 Future List Honoree, Alfred Wahlforss, Co-Founder at Listen Labs, brings a unique blend of curiosity, empathy, and creative thinking to his work in market research. A passionate film lover, Alfred sees a strong connection between storytelling on screen and understanding human behavior. “Behind every data point, there's a real person,” he says—an insight shaped by films like Cinema Paradiso, which remind him of the power of shared experiences and the importance of human connection in research.

Rather than pointing to a single career role model, Alfred finds inspiration in the collective energy of the research community. From seasoned experts to new voices pushing boundaries, he’s driven by a shared curiosity to understand the “why” behind behavior.


Outside of insights, what are your passions and interests?

I'm a huge film buff – I really love how movies can transport you and make you think in completely new ways. It's like reading a great book. I'm drawn to all sorts of films, but the ones that really stick with me are those that explore the complexities of human connection and how we relate to each other. There's something about seeing a story unfold on screen that helps me understand people a little better.

And in a weird way, that connects to my work in market research. We're always trying to understand people's motivations, their experiences, the stories they're living. I recently watched Cinema Paradiso again – it's such a beautiful, nostalgic film about the power of shared experiences and memories, and how those moments shape who we are. It got me thinking about how shared cultural experiences also impact consumer behavior, and the importance of finding those common threads in research. Films help me remember that behind every data point, there's a real person with a story to tell. It's important not to forget that, and watching films helps.

Who is your career role model or source of inspiration?

I genuinely don't have one single person I'd call a 'career role model,' but I'm constantly inspired by the collective wisdom and passion of the market research community. I really mean that. From the moment I started in this field, I've been struck by how collaborative and open everyone is. I'm inspired by seasoned researchers with decades of experience who are willing to share their insights and mentor the next generation.

I'm inspired by the newer folks who are bringing fresh perspectives and challenging traditional methods. I'm inspired by the people who are constantly experimenting with new technologies and pushing the boundaries of what's possible. The best researchers I've met are like really good detectives – they have this insatiable curiosity and a drive to uncover the 'why' behind people's behaviors. That dedication to understanding human behavior is what I find truly inspiring, and it's something I try to bring to my own work every day. I am learning from you all.

When did you know you wanted to enter a career in insights, and what inspired you?

The real story started with frustration. At my last startup, Bemlo, we did healthcare staffing - trying to match nurses with hospitals. We kept running interviews with nurses but could never do enough of them to really understand what they liked or didn't. Five interviews tell you something, but are those five nurses representative? No one knew.

Then at Harvard, while doing my graduate degree in Data Science, I stumbled into this research project. We were experimenting with using AI to moderate qualitative interviews. Most people thought AI was just for analyzing data, but we were trying something different - could AI actually have meaningful conversations with people?

I remember the first time we got it working. The AI was having this incredibly natural conversation with someone, asking follow-up questions, going deeper on interesting points - all the things good researchers do. That was the lightbulb moment. Not just "hey this is cool tech" but "wait, this could solve that problem I kept hitting at Bemlo."

The exciting part wasn't that AI could replace researchers - it was that it could help them do more of what they're best at. Instead of spending hours handling logistics and transcription, they could focus on finding insights. Instead of being limited to a handful of interviews, they could have hundreds of deep conversations and spot patterns they'd never see otherwise.

That's what drove me to start Listen. Not just building another AI tool, but solving this fundamental problem in qualitative research - how do you get deep insights at scale? The Harvard project showed me it was possible. Building Listen has been about making it real.

What is the most valuable lesson you have learned from market research?

The most valuable lesson came from a mistake. When we first built Listen's AI interviewer, we thought having smart technology was enough. We had this advanced language model, all these clever algorithms. The AI could have conversations that seemed good on the surface. But something was missing.

Then we started working with experienced researchers, and it hit me. They'd listen to these AI conversations and point out all these moments - "here, you missed a chance to dig deeper" or "see how the person hesitated? That's where the real insight was." At first it was frustrating. The conversations were technically good, so why weren't they good enough?

This gets at something deeper about research. It's easy to think it's just about asking questions and getting answers. But great researchers aren't just collecting responses - they're building a space where people feel comfortable sharing their real thoughts, their uncertainties, their contradictions. That's where the valuable insights come from.

It's like that saying about the map not being the territory. You can have the perfect research methodology, the most advanced AI, but if you're not tuned into the human elements - the hesitations, the subtle shifts in tone, the things people almost say but don't - you're missing the most important data.

This transformed how we thought about Listen. Instead of trying to make AI that could just ask smart questions, we started focusing on making AI that could create those spaces for real conversation. The technology matters, but it's just a tool. The real skill in research is knowing how to listen - not just to what people say, but to what they mean.

Understanding this changed everything about how we build our product. And it taught me something about innovation too - sometimes the biggest insights don't come from the technology itself, but from really understanding the human expertise you're trying to augment.

What do you think the key characteristics or qualities of a leader are? How does this play into MRX?

I've been thinking about this a lot since starting Listen. We work with research leaders at companies like Google and Microsoft, and it's fascinating to see how they approach insight work at scale.

The most important quality I've found is being comfortable with uncertainty. When we started Listen, we had this vision of AI-powered qualitative research, but the experienced researchers we worked with taught me something crucial - great research leadership isn't about having all the answers. It's about asking the right questions and creating space for unexpected insights.

This connects to the second quality: genuine curiosity. One research director we work with always says "if you're not surprised by at least 30% of your findings, you're probably not looking hard enough." The best research leaders I've met aren't just running studies to confirm what they already think - they're constantly exploring, questioning assumptions, digging deeper into surprising findings.

But here's the interesting tension - you need to balance this uncertainty and questioning with decisive action. When you're dealing with qualitative insights, you rarely have perfect information. One of our clients, a head of research at a major tech company, puts it well: "Our job isn't to have complete certainty. It's to be confident enough in our insights to help the company make better decisions."

The final piece is what I call "humble expertise" - being good enough at something to know how much you don't know. The best research leaders I've worked with are often the first to say "I'm not sure about this - let's dig deeper." They know research is a craft that you never fully master. There's always more to learn, especially now with new technologies changing what's possible.

These qualities compound in interesting ways. When a research leader can sit with uncertainty, stay curious, act decisively when needed, and remain humble about their knowledge, they create spaces where real insights can emerge. Whether you're running a small research team or leading insights for a global company, that's what makes the difference between surface-level findings and true understanding.

I'm still learning these lessons myself. Every time I talk with experienced researchers, I'm reminded how much depth there is to this field. The technology we're building at Listen is just a tool - it's these leadership qualities that turn data into real insights.

How do you advocate for others on your team or your customers?

This question gets at something I think about daily. For our team, advocacy starts with a simple truth - AI should help researchers do more of what they love, not replace their expertise. When we first started Listen, every product decision came back to this: "Will this help researchers spend more time on deep insight work and less time on logistics?"

Here's a real example. We had investors pushing us to make the AI interviewer completely autonomous. Made sense from a pure tech view - fully automated, more scalable. But our research customers kept telling us they wanted to stay involved in crafting the interview approach, even if the AI handled the actual conversations. Some people saw this as researchers being resistant to change. But listening closer, we realized they were right - their expertise in designing research questions was crucial for getting meaningful insights.

So we rebuilt our product to make researchers central to the process. They shape the research strategy and interview structure, while the AI handles the time-consuming parts like conducting hundreds of interviews and initial analysis. When we showed this to researchers, one of them said, "Finally, someone gets that we want to be researchers, not project managers."

For our customers, advocacy means being really honest about what AI can and can't do. Sometimes that means telling a potential customer that their project isn't right for our platform. Sure, that costs us some deals, but it builds trust. We had one research director tell us they chose Listen specifically because we were upfront about our limitations.

Internally, advocating for our team means creating space for deep work. When our engineers or AI researchers need time to solve complex problems, we protect that time. Same for our research team - when they're deep in understanding customer needs, they get the space they need without constant interruptions.

It's about seeing advocacy not as something extra we do, but as core to how we work. Whether it's standing up for researchers' expertise, being honest with customers, or protecting our team's ability to do their best work, it all comes back to the same idea - helping people do more of what they're best at.

What trends do you see making headway on the horizon of insights? What trends do you think will falter in years to come?

Let's be real about AI in research, since that's what everyone's talking about. Coming from the AI side, I see something interesting happening. The tools getting traction aren't the ones trying to automate everything - they're the ones that help researchers work better.

I think we'll see more AI tools that actually understand qualitative research principles. Not just tech that can process text, but systems that get why a researcher might spend 10 minutes building rapport before asking the hard questions. At Listen, that's been our biggest learning - AI needs to understand the craft of research, not just the mechanics.

But here's what I think will falter: the idea that you can just feed all your data into a large language model and get meaningful insights. We're already seeing this fail. These models can summarize data, sure, but they miss the nuance that makes qual research valuable. It's like trying to understand a movie by reading its plot summary - you miss all the important details.

What's really exciting is the trend toward hybrid approaches. Imagine being able to have hundreds of deep, meaningful conversations with users, but still keeping the human researcher at the center of making sense of it all. That's where I see things heading - not replacing researchers, but giving them superpowers.

One trend that might surprise people: I think we'll see a return to focusing on research fundamentals. As the tools get more powerful, the difference between good and great research won't be about who has the fanciest tech. It'll be about who knows how to ask the right questions and interpret the answers thoughtfully.

What will probably fade? The push for instant insights. Some companies are promising real-time, automated research insights. But good research takes time - not necessarily weeks or months like before, but you can't shortcut the thinking part. Understanding human behavior is complex, and I don't see that changing.

And here's a controversial take: I think some of the current obsession with quantifying everything in research might fade. As we get better at processing qualitative data at scale, we might find that rich, nuanced understanding is more valuable than trying to turn everything into a number.

These aren't just guesses - we're seeing these patterns play out with our clients at Listen. The ones getting the most value aren't the ones trying to fully automate their research. They're the ones using AI to have more conversations, dig deeper, and spend more time on making sense of what they learn.

If you could change one thing about insights, what would it be?

I'd change this idea that you have to choose between depth and scale in qualitative research. It's interesting - this problem hit me first during that Harvard project, and it's stuck with me ever since.

Think about it: most researchers face this impossible choice. Either you do a few really deep interviews and hope those people represent your whole audience, or you do a broad survey that misses all the rich, human detail. It's like trying to understand a city by either having coffee with three people or looking at satellite photos. Neither gives you the full picture.

This drives me nuts because it's not just about getting more data. I've seen researchers who know exactly what questions they want to ask, exactly how they'd explore someone's response - but they can't do it at scale because there aren't enough hours in the day. It's why some of them almost got emotional when they first used Listen - suddenly they could have those deep conversations with hundreds of people instead of just a handful.

But here's the thing that really needs to change: the assumption that scaling research means making it shallower. At Harvard, working on that AI interviewer project, I realized we could actually go the other way - use technology to scale the depth, not reduce it. Have hundreds of genuine conversations where you really explore what people think and feel, not just what they say on the surface.

This isn't just about AI or technology. It's about changing how we think about research itself. What if we didn't have to make that trade-off between depth and scale? What if researchers could spend most of their time on the hard, interesting parts of their job - understanding people and finding insights - instead of managing logistics and transcripts?

That's the one thing I'd change - this false choice between deep understanding and broad representation. Because when you solve that, you unlock something really powerful: the ability to truly understand people at scale.

storytelling, artificial intelligence, qualitative research, data science

