Using Store Intercepts or On-Site Surveys to Measure Customer Loyalty: Four Errors Even the Most Experienced Researchers Make and How to Avoid Them

Most businesses want to know how satisfied their customers are, and what can be done to make them even more satisfied. However, if not done right, store intercepts or on-site surveys used to measure customer satisfaction and loyalty can produce useless or (even worse) misleading results. Here are four errors even the most experienced researchers sometimes make, and how to avoid them.

[1] No frame of reference.

Problem:  Too often store intercepts or on-site surveys don’t go beyond overall and detailed (attribute) ratings of the client’s business. This means that researchers have little context for interpreting the results and can’t answer key questions.

Solution: Design your store intercept or on-site survey to provide a frame of reference. Consider adding questions about your performance compared to expectations and/or competitors. Conducting a companion survey of employees (especially sales) with similar questions can also be useful.

[2] No tracking.

Problem:  Many businesses only perform a single intercept or on-site survey study to measure customer satisfaction. There can be logistical, financial or other barriers to repeating this research. In addition, businesses may feel that they’ve achieved their research objectives. However, it’s often valuable to know whether satisfaction is getting better or worse and it’s hard to learn this from a single study. Also, things going on in the marketplace can distort single-study results.

Solution: Track customer satisfaction over time by deploying store intercepts or on-site surveys regularly. The frequency with which you repeat this research depends on the nature of your customer base (size, churn, purchase frequency, etc.). It may not be necessary to ask every question every time. For example, you could field a full store intercept or on-site survey once a year and a shortened quarterly version (possibly with a smaller sample size) in between.
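Once you have two or more waves, you will want to know whether an apparent change in satisfaction is real or just sampling noise. One common approach is a two-proportion z-test on a "top-box" rate (for example, the share rating "very satisfied"). The sketch below is illustrative only; the wave sizes and rates are hypothetical, and it assumes independent simple random samples in each wave.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Z-statistic for the change in a top-box satisfaction rate
    between two survey waves (pooled standard error)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Hypothetical waves: 72% of 400 respondents "very satisfied" last
# quarter vs. 78% of 400 this quarter.
z = two_proportion_z(0.72, 400, 0.78, 400)
# |z| > 1.96 would suggest a real shift at the 95% confidence level.
```

Note that with smaller quarterly samples, only fairly large swings will clear the significance bar, which is one reason to reserve fine-grained comparisons for the full annual wave.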

[3] Poor sampling.

Problem:  Many researchers pay inadequate attention to sampling in measuring customer satisfaction via store intercepts or on-site surveys. Who you ask is at least as important as what you ask, and ending up with an unrepresentative or otherwise inadequate sample can reduce the validity and reliability of your results.

Solution: Planning and attention to detail are vital to good sampling in using store intercepts or on-site surveys for customer satisfaction research. We have several recommendations:

• Do not use convenience samples unless absolutely necessary;

• Try to have respondents from all relevant major customer segments (geography, industry, product/service type, tenure, etc.);

• Make sure you obtain enough responses – both overall and within each segment – to support desired precision and breakouts;

• Stay in the field long enough, and check your data while in the field so you don’t hear from only the most satisfied customers, only the least satisfied, or only the two extremes (not the middle);

• In B2B research, pay attention to both the number of accounts selected and the number (and characteristics) of individuals selected within each account; and

• Think about whether you also want input, at least on some questions, from former and/or prospective customers.
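The "enough responses" point above can be made concrete with the standard sample-size formula for a proportion. This minimal sketch assumes simple random sampling and a worst-case 50/50 split; the ±5-point and ±10-point targets are hypothetical examples, not recommendations.

```python
import math

def required_sample_size(margin, p=0.5, z=1.96):
    """Simple-random-sample size needed to estimate a proportion
    within +/- `margin` at 95% confidence (z = 1.96)."""
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

# Hypothetical precision targets:
required_sample_size(0.05)  # 385 responses for +/-5 points overall
required_sample_size(0.10)  # 97 responses per segment for +/-10-point breakouts
```

Running the per-segment number against your planned quotas is a quick way to check whether a breakout you have promised stakeholders will actually be readable.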

[4] Flawed attribute lists.

Problem:  Many store intercepts or on-site surveys ask customers to rate companies, brands, or products on a list of attributes. The problem is that these attribute lists are often too long, incomplete, and/or imbalanced. This can reduce overall data quality (respondent fatigue leading to item non-response, lack of variation, etc.). It can also yield misleading results – for example, if your list is incomplete or skewed, what the data suggests is most important may not be what’s truly most important to your customers.

Solution: Take the time to carefully construct your attribute list. Keep it as short as possible (minimize overlap/redundancy), include a mix of functional and emotional attributes (seeking input from multiple sources), use your customers’ vocabulary (e.g., from open-ended responses, interviews, or focus groups), and be consistent in wording and scaling if making comparisons with other/previous research.

Interested in learning more? Call us at 1-800-549-7170 or email for a free 30-minute consultation on this topic. Gold Research Inc. has extensive experience in measuring customer satisfaction via store intercepts or on-site surveys, and we would love to talk with you about it!


This content was originally published by Gold Research Inc.