
February 9, 2024

3 Reasons Research Teams Fail to Generate Hypotheses (and How to Address Them)

Uncover the often overlooked importance of pre-fieldwork alignment in research and gain valuable tips for overcoming related obstacles.

Full disclosure: it’s entirely possible to run research without hypotheses – we’ve all seen it, and most of us have done it – despite the long-ago stipulations of our seventh-grade science teachers.

That said, ‘possible’ isn’t the same as ‘optimal’, and anyone who has worked on a study with clear hypotheses from the outset (about what consumers will think, feel, or need, or how they’ll respond to new ideas) will know the upsides. Taking time upfront to brainstorm what all parties believe research might tell us leads to more consistent stakeholder buy-in (it’s tough to develop a best guess and not feel invested in the outcome), better storytelling (i.e. ‘it’s kind of what we thought, but subtly different in these interesting ways…’), and easier impact measurement (being able to point confidently to ‘new’ fieldwork learnings).

And yet, because hypothesis generation is technically skippable – and researchers, at least for now, remain ‘System 1’ human beings – it’s frequently skipped. Got a question? Great, let’s head into fieldwork. 

Below are what I’ve come to see as the 3 horsemen of the hypotheses apocalypse (say that out loud on your first try and you are infinitely more talented than you realize).

1. Nobody Asks 

This is straightforward, and the primary issue in most hypotheses-less projects: people aren’t given substantive opportunities to voice what they think research will uncover. ‘Substantive’ is important here (a frazzled five minutes at the unfortunate end of a project kick-off Zoom call does not an adequate forum make), as is ‘opportunities’, plural. Rather than a one-and-done, hypotheses can and should evolve from project scoping right up until fieldwork, and capturing that process is no bad thing for later storytelling brainstorms.

The wider problem is that people are often unclear about roles. Who’s asking for hypotheses, and who’s sharing them? Is this the agency’s job, or is it on the client? Do we need hypotheses from people at the ‘cliff face’ of consumer interactions, or from those in the C-suite?

The good news is that straightforward challenges mean straightforward solves:

  • First, stress hypothesis generation as an open group task. It’s the project lead’s job to make sure it happens, but anyone with an interest in the outcome of the research is fair game to ask.
  • Second, ensure hypothesis generation is given dedicated time as early as possible in a project process – and repeated, at least for the core project team. Give meeting attendees a heads up they’ll be asked to share, and don’t sacrifice this agenda item to walking through the timeline – let’s face it, that will change anyway. Side note: if meetings closer to fieldwork become more focused on refining than generating, that means hypotheses are evolving as expected – so congratulations.
  • Third, record and play back hypotheses. Nothing encourages stakeholders to contribute more than knowing their input will ‘live’ somewhere, so start a shared repository, give people unfettered access, and look for opportunities to recap what’s already been learned prior to speaking with consumers (think: update emails, weekly recaps, etc.). 

2. People Don't (Yet) Know Enough

Also very understandable in a data-led world: what research-adjacent exec is going to feel comfortable putting forward an answer, however tentative, ahead of seeing the insights?

That said, the solution here is less obvious than that of our first issue. Logic would dictate you ask everyone to review pre-existing reports and generate freshly informed predictions; but that glosses over the fact that the writers of these original reports had their own perspectives that might be entirely at odds with the expertise of our own stakeholders, that consumers do this annoying thing called ‘change’, and that being asked to read umpteen (old) reports is a buzzkill. Put simply, we need the CMO to bring her take on a challenge; not to parrot the conclusions of a well-meaning 2017 Insights Manager.

Instead, it’s vital to get people comfortable with hazarding guesses – from the data-backed to the gut-felt. That means:

  • Ask the project leads to ‘go first’. It’s infinitely easier to respond to a question or existing hypothesis than it is to be given a blank canvas and asked to play fortune teller. Ask stakeholders to riff off the hunch that consumers will tell us X, or that we’ll find out we’ve been doing Y wrong for a while now. Play devil’s advocate, and be (openly) provocative: letting people say confidently that a hypothesis is off-base is a great way to warm them up.
  • Consider session composition. Asking a budding strategist and their boss’s boss to share data-less musings in the same session is comfortable for neither party: the junior employee will be concerned about looking naïve, and (best case) the senior employee will feel nobly compelled to give others the floor. Instead, plan for separate meets based on who’s being asked to share – and think carefully about seniority level, cross-team politics, and client vs. agency composition. 

3. Everyone’s Seen Hypotheses Ignored Before

You know the drill: groups have finished, data is in, and so begins the fervent sprint to the (inevitably moved-up) deliverable due date. Right when hypotheses are most helpful in sorting the insight wheat from the chaff, they’re too often tossed aside in favor of responding directly to original research questions – which, though key, haven’t benefitted from the same all-important evolution mentioned above.

Generating collective guesses might have been helpful upfront. As paid work goes, it may even have been fun. But put yourself in the shoes of a busy colleague tasked with getting their next project off the ground: if hypotheses are ultimately given such short shrift, why put in the legwork?

Getting the best from hypotheses means paying them due attention before and after fieldwork – i.e.:

  • Use them as an in-road to analysis. Differentiating between what’s surprising and important vs. what’s ‘just noise’ remains one of the researcher’s hardest tasks, and one that even 2024’s poster child of AI looks unlikely to take off our plates anytime soon. Hypotheses are gold dust here: whether they’re proven or invalidated, stakeholders will find it interesting. 
  • ‘Time capsule’ them in the deliverable. It’s far from the scope of this article to cover the highs and lows of research storytelling; but some of the best examples I can remember incorporate not only the story of the industry, issue, or opportunity at hand – but also the story of the project. Have your objectives slide, have your approach slide…but also think about whether a hypothesis slide deserves a space – or at least whether pre-fieldwork thinking should be reflected in-flow. Done right, research influences consensus quickly and markedly, and recommendations soon feel like common sense. Reflecting what was known beforehand is important in preserving project impact. 

And there you have it: the essential guide to non-essential hypotheses (an easier tongue twister to finish – but deserving of praise nonetheless).


Disclaimer

The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.