AI is an Opportunity. The Problem is IA—Insight Atrophy

The danger isn’t AI, but losing insight skills. Reclaim storytelling, imagination, and intuition to keep human insight at the center.

This is a moment of crisis for marketing insights professionals. Budgets are shrinking, headcounts are thinning, and our collective influence on strategic decision-making is fading.

It is tempting to point the finger at artificial intelligence, but that would be unfair. The crisis stems not from AI, but from IA: Insight Atrophy.

A recent Cannes Lions survey of 1,000 marketers and creatives found that more than half rated their ability to develop high-quality insights as “poor” or “very poor.” That didn’t start with AI.

AI should be a spur to our flank, inspiring us to reframe our role, not as mere collectors of information but as interpreters of meaning and contributors to the bottom line.

Our Current Reality

The term “insight” suggests surprise, wonder, or the resolution of an important tension, all of which can inspire deeper thinking. But, as the Cannes survey implies, much of what passes for consumer insight is confirmatory, narrow, or shallow. AI can replicate that efficiently and at scale.

We are capable of better, but the insights function in most organizations is under pressure to move fast and spend less, which pinches our thinking. We end up with less creativity, imagination, and experimentation—less use of the skills that separate human from machine.

AI didn’t cause IA; it just revealed how extensively it had spread. To remain relevant we must embrace our uniquely human superpower—our ability to read between the lines.

Rethinking the Research Process

Gathering the data. Think about someone who loves you, really loves you.

How do you know they love you? Is it how often they say “I love you”? Is it how much money they spend on you? Is it how much time they spend with you?

It’s all of those things, yet none of them. It’s a thousand intangibles: an inside joke, a touch, a look. You can’t prove love, but you can feel it.

This ability to listen for what we can’t hear and bear witness to what we can’t see separates humans from AI. Surveys, focus groups, and traditional in-depth interviews (IDIs) aren’t built for that sort of introspection. Increasingly, AI will handle those forms of research; it can read a list of predetermined questions and gather basic facts just as well as we can.

However, AI cannot replace genuine human-to-human conversation where interviewers listen between the cracks of the dialogue. A good interviewer scans a conversation for subtle signals that you won’t find in a transcript. A respondent who mutters, “That was a good ad,” likely means something different from the next person who points at the screen and exclaims, “THAT was a GOOD ad!”

Text-based AI interview platforms miss these nuances entirely. Furthermore, in writing, people are reflective and deliberative; they tend to censor thoughts that may seem weird or too revealing, which is exactly what we don’t want respondents to do.

Even AI moderators that invite respondents to answer questions verbally may struggle to follow up appropriately when body language, facial expressions, and word choice suggest there is more than meets the eye.

In a recent food and beverage study, for example, one consumer told us, “You have to be sure you are eating healthy these days.” Although we expected to hear about healthy eating, the phrase “these days” sounded like a rumble inside the mountain. A simple, “What do you mean by ‘these days?’” ignited a pyroclastic blast of cynicism about reckless corporations, feckless regulators, rapacious health insurers, and a naïve public lured to early graves by “big food.”

We didn’t expect to hear this, but soon we were hearing echoes of it everywhere in innocuous-sounding phrases—“You have to be mindful,” “Not everyone pays attention,” and “It’s important to eat real food”—that were pregnant with meaning. And it makes sense, given the heightened suspicions about vaccines, fluoride in water, and processed food that we hear about in the news.

Traditional consumer research, with its checklist of questions and strict time constraints, would have skimmed over these nuggets. AI likely would not have “known” to follow up because it is not a human living in our culture and lacks intuition. It might have followed up if it had been trained to, but you can’t train AI to follow up on a topic you don’t expect to encounter. In this case, on-the-fly human intuition led us to a unique, differentiating insight that proved an antidote to IA.

Catching an insight is only part of the job. The next phase is putting it in context.

Analyzing the data. AI already has helped unlock significant breakthroughs in the hard sciences, where cells, enzymes, and cold fronts behave systematically and predictably.

Humans, though, are a mess. Our thoughts, feelings, and behaviors can be influenced by anything, from the color of the walls to what our parents were like.

Research that reports what people think, which is fairly straightforward, is at risk of being replaced by AI. But research that uncovers why still requires a human touch because AI doesn’t see the world as we do.

Theoretically, an LLM is trained to think like a human. But its flaw is that it has been trained by humans. We don’t understand how our own thinking works because the mechanisms are largely unconscious. In the end, an LLM thinks the way we think we think, which isn’t how we think at all.

Take metaphoric language, which is often a key that unlocks unconscious motivations. We can train AI to identify and organize metaphors, but it struggles to interpret their significance and what they say about the meaning of an experience.

In a study about rosacea, a skin disorder that causes facial flushing, focus groups and quantitative research had suggested that the condition was no big deal. However, when we explored rosacea on a more metaphorical level, we heard this:

  • “It consumes your whole face.”
  • “It’s agonizing to look in a mirror.”
  • “It is emotionally scarring.”
  • “It engulfs you.”
  • “It destroys your self-image.”
  • “No one knows what sparked it.”

An LLM could identify themes like shame, trauma, powerlessness, chaos, and alienation in the language above. But expecting it to prioritize these particular metaphors above the thousands that arose in more than 40 hours of interviews, weave them into a meaningful story, and extract their strategic value is, for now, asking too much.

In this case, the story the metaphors told is that rosacea patients share the psychology of burn victims, an unforeseen insight that inspired communication for the client’s new treatment.

Even when we uncover a powerful insight, the work isn’t done. Clients must move from insight to action.

Applying the data. Like a Bob Ross painting, the innovation landscape is sprinkled with happy accidents. Velcro was inspired by burrs on a dog’s fur. The printing press by the wine press. The Slinky by a spring that tumbled off a shelf.

AI struggles with forward-looking, idiosyncratic decisions that require reasoning and some degree of experimentation or intervention. Breakthrough innovations, new business strategies, and fresh brand positioning all require those kinds of decisions.

Put another way, all new ideas start with someone saying, “What the heck?”

  • What the heck just happened?
  • What the heck could happen?
  • What the heck were they thinking?
  • What the heck, let’s try it!

It’s a mindset of wonder. AI is a doer, not a wonderer. It does things right; it doesn’t have happy accidents.

That makes it exceptional at solving problems and puzzles and processing vast data sets. But it can’t peek between the lines and wonder, “What the heck?” And that is a shortcoming. For instance, AI has been shown to help venture capitalists avoid big mistakes and rack up small wins, but it also consistently misses the high-risk, high-reward plays that offer VCs their greatest returns.

Many brilliant branding ideas and innovations were spawned by someone veering from the data. Data analysis could not have spawned Dyson, Red Bull, or Liquid Death. They arose from innovators trusting informed intuition, rooted in culture, human needs, and human emotions.

Good strategy begins where the data ends. AI can assist, but it still requires a human to lead it outside the safe, comfortable dimension of the known.

So What Does It Mean?

Miles Davis advised young musicians: “Don’t play what’s there, play what’s not there.”

AI excels at playing what is there, but only we can play what isn’t there—through imagination, storytelling, intuition, and a feel for culture and emotion. That requires asking better questions, thinking more broadly, and focusing less on fact-gathering and more on uncovering true insights—the things that make you go, “Hmmmm….” 

One might argue, “Yes, AI struggles with this, but so do many people.” That is the call to action. AI can’t read between the lines to fully explore and interpret what consumers mean. We can, but often don’t. Calendars are crammed. Some organizations may not support curiosity. It can feel scary to ask “stupid questions.” These are the conditions that breed IA, and over-reliance on AI will only make it worse.

Just as we train AI, we must train ourselves to do what AI can’t. If we focus more on the why behind the what, we gain the power to deliver deeper value to the clients, brands, and organizations we serve.

AI may run the models, but we can still write the stories.
