July 5, 2024
Explore tracker best practices for survey research, covering question framing, sample source, demographics balancing, cadence, key metrics, and margin of error.
Note From the Author: I wrote this based on what I have heard and seen cumulatively across both B2C and B2B research throughout my career as an independent research consultant and through my employment with firms such as NewtonX, Suzy, Dynata, and MDC Research. Many companies and agencies appear to be making highly impactful design errors, or neglecting impactful variables, because there are few materials that comprehensively describe tracker design principles. I am writing to spark conversations about how trackers are run today and what improvements would increase their usefulness. This article discusses some of these best practices, focusing on the critical themes of question framing, sample source, demographics/firmographics balancing, cadence, key metrics, and margin of error. The writing herein contains my thoughts, opinions, and recommendations alone and does not represent the views of any of the aforementioned research companies.
Why do you want to track specific metrics over time, and what will your business do with the information?
If these questions are difficult or impossible to answer, save your research budget. Trackers set up without clear objectives are set up for failure: they will sit on the virtual shelf and be seen as a drain on business resources. Primary research tracking is a powerful tool for measuring purchase drivers, performance, and perceptions over time, but only if the tracker is built and run to give a business team a guiding light.
Conducting an effective tracker requires the research team to take the perspective of both the customers/prospects and the business owner. If you own the business, what are your absolutes? You need information you can act on to move the business (revenue, profit, growth), and you need to trust the data behind that action. From the perspective of the customers/prospects, you would like the business to improve its offering for you or the way it conducts business with you.
One of the most valuable aspects of tracking projects is the key metrics. The key metrics are the indicators and measures used to evaluate and compare performance and perceptions over time. The key metrics should be relevant, reliable, and actionable for the brand and the market.
The key metrics should also be consistent across different survey waves to ensure comparability and validity of the data. Some of the most common and valuable key metrics for tracking projects are Top of Funnel, Unaided Awareness, Aided Awareness, Secure Customer Index, and Net Promoter Score. While I could write another article only on NPS vs SCI, plenty of articles have already been written on why NPS is not a stable or reliable measure, and I hope you have already read these.
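For a concrete reference point, here is a minimal sketch of the standard NPS calculation (percent promoters scoring 9-10 minus percent detractors scoring 0-6 on the 0-10 likelihood-to-recommend item); the sample scores are illustrative:

```python
# Minimal sketch: computing NPS from 0-10 likelihood-to-recommend scores.
# Standard definition: % promoters (9-10) minus % detractors (0-6).

def nps(scores: list[int]) -> float:
    """Return Net Promoter Score as a value between -100 and 100."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

wave_1 = [10, 9, 8, 7, 6, 10, 9, 3, 8, 9]  # illustrative respondent scores
print(f"Wave 1 NPS: {nps(wave_1):.1f}")    # -> 30.0
```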
Question framing is one of the most critical aspects of brand tracking projects. The questions asked to the respondents should be clear, concise, relevant, and unbiased. They should also be consistent across different survey waves to allow for meaningful comparisons over time. The questions should avoid leading, double-barreled, or ambiguous wording and use appropriate scales and response options.
Keep in mind you will receive different results if, in one wave, you ask "Which brands come to mind in the beverage industry?" vs. "Which brands come to mind in the alcoholic beverage industry?" or "Which mid-size bank brands come to mind?" vs. "Which bank brands come to mind?" If you want to optimize an existing tracker, start by looking at your list-based questions.
Do they have an opt-out response option (I don't know, None of the Above, Other, etc.)? If not, some of your responses have been artificially inflated and will come down once an opt-out is added. Also check for unbalanced scales (two positive points and one negative), binary yes/no questions, and ambiguous outcomes such as "Which of the following are you doing or plan to do…" What action can the business take from this? We have percentages but cannot distinguish intent from existing behavior.
Another crucial factor for tracking projects is the sample source. The sample source refers to the method and the criteria used to select and recruit the respondents for the survey. The sample source should be reliable, representative, and relevant to the brand and the market.
Consider the differences and trade-offs between using a verified sample, loyalty partnerships, panel exchange sample, convenience sample, or open sample (formerly known as river sample). While some of these sources offer speed and low investment, they come with deficits in representativeness, thoughtful responses, and validity.
Additionally, if you are running the tracker online, are you balancing the supplier (and possible sub-suppliers) on survey starts or on completes? Discuss the appropriate CPI (cost per interview) with your preferred vendor. Do not be surprised to learn that 70% or more of the CPI is vendor margin rather than respondent incentive, but keep that in mind when considering recruitment.
A higher incentive can help secure more of the desired audience with less outreach. Online panel companies and panel exchanges automatically route more traffic to higher-CPI/revenue projects than to "non-money makers." If you need speed, don't try to negotiate down the CPI; look for other ways to cut costs.
The sample source should also be consistent across survey waves to ensure comparability and validity of the data. Because panel companies recruit in fundamentally different ways, we cannot treat them as interchangeable. Suppose you run your n=1,000 tracker primarily with Vendor A, who secures 70% of the sample, while Vendor B fills in the remaining 30%. If either vendor approaches you claiming they can handle 100% of the next wave at a better price than you are paying now, buyer beware: you will only be able to track against the portion of the sample previously collected by that vendor. Otherwise, some of your key metrics may swing several points even if the audience is demographically identical.
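To see why, here is a hypothetical illustration: if the two vendors' panels happen to skew differently on a metric, changing the blend moves the headline number even though nothing in the market changed. The vendor-level rates below are invented for the example:

```python
# Hypothetical illustration: the same "aided awareness" metric under two
# vendor blends. The vendor-level rates (0.40 and 0.48) are made up.

def blended_metric(rate_a: float, rate_b: float, share_a: float) -> float:
    """Weighted average of vendor-level rates by sample share."""
    return rate_a * share_a + rate_b * (1 - share_a)

# Wave 1: Vendor A supplies 70% of completes, Vendor B fills the rest.
wave_1 = blended_metric(0.40, 0.48, share_a=0.70)   # 0.424

# Wave 2: Vendor A takes 100% of the sample at a "better" price.
wave_2 = blended_metric(0.40, 0.48, share_a=1.00)   # 0.400

print(f"Apparent shift: {100 * (wave_2 - wave_1):+.1f} points")  # -2.4 points
```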
One cost-cutting method some client-side teams use is to run a customer tracker, which uses the client's own CRM database as the sample source for the survey. Before covering the tactical trade-offs of this approach, it is essential to consider the health of the CRM and the health of the relationship between the business and its customers.
From a feasibility perspective, on average you can assume that 1% of invited customers will enter the survey; dropouts and over-quotas will then reduce this sample further. For a customer tracker with n=1,000, you would need to send approximately 133,000 customers at least one email. Does the business have enough customers to sustain this volume?
Does the company have a healthy enough relationship with its customers that this will not cause significant attrition? Consider this carefully, even if your stakeholders believe “we are closer to our customers than the average! Our response rate will be higher, more around 8%!” Ask to verify this with a smaller test pilot.
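As a back-of-envelope check, the feasibility arithmetic above fits in a few lines; the 1% entry rate and the survival rate through dropouts and over-quotas are assumptions you would validate with that pilot:

```python
# Back-of-envelope feasibility check for a CRM-sourced tracker.
# Assumed rates: ~1% of invited customers enter the survey, and roughly
# 75% of entrants survive dropouts and over-quotas. Both are assumptions
# to validate with a pilot before committing.

import math

def invites_needed(target_n: int, entry_rate: float = 0.01,
                   survival_rate: float = 0.75) -> int:
    """Emails required to land target_n completes."""
    return math.ceil(target_n / (entry_rate * survival_rate))

print(invites_needed(1000))  # ~133,334 -> on the order of 133,000 invites
```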
More than one Fortune 100 company has told me its response rate would be significantly over 1%, only to do an about-face later and ask its CRM management team for 100,000 more records. Survey engagement is not the same as engagement with articles or sales emails. From a methodology perspective, remember that managing quotas, terminations, and incentives is often challenging because of the need to be sensitive to the sample population. This approach typically requires weighting the data after collection to maintain proportions with the customer population.
If you are unfamiliar with rim weighting, I usually recommend bounds of either 0.5 to 2.0 (so one respondent can have at most 4x the voting power of another) or 0.3 to 3.0 (at most 10x), depending on your comfort with individual weights. You are also looking for a weighting efficiency score higher than 85% (some may want this even higher). The elephant in the room with this approach is that you have no non-customer sample; if you wish to assess top-of-funnel or bottom-of-funnel metrics, this is not your ideal approach.
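For readers who want to see the mechanics, below is a minimal rim-weighting (raking) sketch with capped weights, plus the Kish weighting-efficiency calculation. In practice you would use a tested weighting package; the variables, targets, and simulated skew here are illustrative:

```python
# Minimal rim-weighting (raking) sketch with capped weights, assuming two
# categorical balancing variables. Illustrative only; use a vetted package
# for production work.

import numpy as np

def rake(categories: dict[str, np.ndarray],
         targets: dict[str, dict],
         lo: float = 0.5, hi: float = 2.0,
         iters: int = 50) -> np.ndarray:
    """Iteratively adjust weights so each variable's weighted margins
    approach its target proportions, clipping weights to [lo, hi]."""
    n = len(next(iter(categories.values())))
    w = np.ones(n)
    for _ in range(iters):
        for var, labels in categories.items():
            for level, target in targets[var].items():
                mask = labels == level
                current = w[mask].sum() / w.sum()
                if current > 0:
                    w[mask] *= target / current
        w = np.clip(w, lo, hi)  # enforce the voting-power bounds
    return w

def weighting_efficiency(w: np.ndarray) -> float:
    """Kish effective-sample-size ratio: (sum w)^2 / (n * sum w^2)."""
    return w.sum() ** 2 / (len(w) * (w ** 2).sum())

# Simulated sample skewed 60/40 on gender and 55/45 on region.
rng = np.random.default_rng(7)
gender = rng.choice(["m", "f"], size=1000, p=[0.6, 0.4])
region = rng.choice(["east", "west"], size=1000, p=[0.55, 0.45])
w = rake({"gender": gender, "region": region},
         {"gender": {"m": 0.5, "f": 0.5},
          "region": {"east": 0.5, "west": 0.5}})
print(f"Efficiency: {weighting_efficiency(w):.1%}")  # aim for > 85%
```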
A related aspect of tracking projects is demographic/firmographic balancing: adjusting and weighting the sample data to make it more representative of the population of interest. Balancing is necessary to account for differences and discrepancies between the sample and the population and to reduce the potential impact of sampling error or bias.
The balancing should be based on the relevant variables and criteria that define the brand's target audience, such as age, gender, income, education, location, occupation, industry, company size, etc. It should be applied consistently across survey waves, using reliable and updated sources for the population parameters. Well-run trackers keep their balancing within 1% of the target wave over wave, and up to 3% variance is generally acceptable given the margin of error on most projects.
If any quota cells exceed 5% variance, I recommend a conversation with your supplier about the root cause and the possible need to weight the data to align with previous waves.
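A simple drift check along these lines might look like the sketch below; the cell names and achieved/target figures are made up, and the tolerances follow the thresholds discussed above:

```python
# Sketch: flagging quota cells that drift beyond tolerance versus target.
# Thresholds mirror the guidance above: >3% worth watching, >5% worth a
# conversation with the supplier. Cell names and figures are illustrative.

def flag_quota_drift(achieved: dict[str, float],
                     target: dict[str, float],
                     warn: float = 0.03, alert: float = 0.05) -> None:
    for cell, t in target.items():
        drift = abs(achieved.get(cell, 0.0) - t)
        if drift > alert:
            print(f"{cell}: {drift:.1%} off target -> discuss with supplier / weight")
        elif drift > warn:
            print(f"{cell}: {drift:.1%} off target -> monitor")

flag_quota_drift(achieved={"age_18_34": 0.22, "age_35_54": 0.44},
                 target={"age_18_34": 0.30, "age_35_54": 0.40})
# age_18_34: 8.0% off target -> discuss with supplier / weight
# age_35_54: 4.0% off target -> monitor
```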
Another critical element of brand tracking projects is the cadence. The cadence refers to the frequency and the timing of survey administration. From my observations across industries, it is far more common to run a tracking project less frequently than to run it too often. Typical cadences include monthly, quarterly, semi-annual, and annual waves, as well as continuous tracking.
While the answer varies based on the industry, market, and business need, anything less frequent than three times a year is generally not frequent enough. Imagine your brand awareness rises 20% wave over wave, yet you are on an annual cadence; what can you tie this increase to? Marketing efforts? The release of a new product? A price increase by your competitor? We will never know. I understand brand teams often face skepticism from their stakeholders about the cadence of their tracker, or a lack of budget, and this is where I recommend working with your supplier on a decoupled data collection and reporting schedule.
An example of this is collecting sample quarterly while the vendor performs analysis and reporting only semi-annually; this captures changes in the market more precisely without the expense of reporting and analytics every wave.
Budget and feasibility aside, the gold standard is the continuous tracker; this type of project looks to collect samples every day or nearly every day of the year. This allows you to pinpoint market shifts and look at changes in your key metrics on a rolling basis or with custom dates (for example, looking at the impact of a specific campaign).
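As a sketch of what a continuous tracker unlocks analytically, assuming daily completes are stored with a date and a binary awareness flag (the data below is simulated), a rolling read and a custom campaign-window read take only a few lines of pandas:

```python
# Sketch: rolling and custom-window reads on continuous-tracker data.
# Assumes one record per day with a binary "aided awareness" flag; real
# data would have many respondents per day. Figures are simulated.

import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
days = pd.date_range("2024-01-01", periods=180, freq="D")
daily = pd.Series(rng.binomial(n=1, p=0.45, size=180), index=days,
                  name="aided_awareness")

# Smoothed trend: 28-day rolling awareness.
rolling_28d = daily.rolling("28D").mean()
print(f"Latest 28-day awareness: {rolling_28d.iloc[-1]:.1%}")

# Custom window: read the metric over a specific campaign flight.
campaign = daily.loc["2024-03-01":"2024-03-31"].mean()
print(f"Campaign-period awareness: {campaign:.1%}")
```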
A final factor to consider for tracking projects is the margin of error. The margin of error measures the uncertainty and the variability of the sample data, as well as the range of possible values for the population parameter. The sample size, design, confidence level, and standard deviation influence the margin of error.
The margin of error is essential for interpreting and communicating the tracking projects' results and assessing the data’s significance and reliability. The margin of error should be calculated and reported for each key metric for each survey wave. The margin of error should also be considered when comparing the data across different waves, groups, or brands and when making decisions and recommendations based on the data.
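For a simple random sample, the margin of error for a proportion is z * sqrt(p * (1 - p) / n). A minimal sketch follows; note that complex sample designs would also need a design-effect adjustment:

```python
# Margin of error for a proportion at a given confidence level, under a
# simple-random-sample assumption.

import math
from statistics import NormalDist

def margin_of_error(p: float, n: int, confidence: float = 0.95) -> float:
    z = NormalDist().inv_cdf(0.5 + confidence / 2)  # e.g., 1.96 at 95%
    return z * math.sqrt(p * (1 - p) / n)

# Example: 42% aided awareness on n=1,000 completes.
moe = margin_of_error(0.42, 1000)
print(f"42% ± {moe:.1%}")  # 42% ± 3.1%
# A wave-over-wave move smaller than ~3 points may just be noise.
```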