Smaller sample sizes—money prudently saved or money foolishly wasted?

It is common practice to reduce the sample sizes used in quant studies to save money, a tendency exacerbated by the current economic environment. Though this approach can sometimes result in prudently saving some money, it can also have a disastrous result: spending the entire budget on a study that does not accomplish its business objective. Here's how.


Suppose company XYZ plans to launch an ad campaign to increase awareness of its recent move into data-center products and services. To evaluate the campaign's effectiveness, it will conduct a two-wave tracking study, with 200 respondents in each wave. Depending on the results, the campaign will be continued, dropped, or replaced with an alternative approach. This may not seem like a very large sample, and it is actually half the 400 that company XYZ would normally use, but budgets are very tight, and reducing the sample size is the easiest way to save money. Wave I of the research is conducted, and awareness of the data-center offerings comes in at 32%. Wave II is conducted six months later.

 

Outcome 1: Traditional (hoped-for) scenario: money saved

Wave II reveals awareness to be 42%. At the .05 significance level, this 10% difference is statistically significant, and Company XYZ relies on the finding to declare the ad campaign successful and effective. The decision is made to continue the campaign. The wisdom of using 200 rather than 400 seems justified: a sample of 400 would only have tightened the design by about 2 percentage points (so that an 8% difference would have been significant), while the sampling cost was cut roughly in half.

 

Outcome 2: Worst-case (but much too likely) scenario: money wasted

Wave II shows an increase in awareness to 40%. This 8% difference is not statistically significant at the .05 level, so Company XYZ cannot rely on it as a "true" finding; it may simply be due to sampling error. The ad campaign has not been proven effective, and Company XYZ has to decide whether to drop or modify it. However, outcome #2 is in fact not at all clear-cut or decisive. There may actually have been a real 10% increase in awareness of Company XYZ's data-center offerings, but with a sample size of 200 and a significance level of .05, the chance of this study detecting an increase of 10% or more was only 52%. There was a 48% chance that the finding would be missed entirely (a Type II error; see the sidebar below).

In other words, if this study were conducted 100 times, only 52 of those times would the real finding of a 10% awareness difference be detected. Almost half the time (48%), the study would miss a real 10% difference.
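The power figure above is easy to check. Below is a minimal sketch in Python using the standard normal approximation for a two-sided, two-sample test of proportions; the exact number depends on which formula is used (this approximation lands a couple of points above the article's 52%, which likely came from a slightly different or continuity-corrected calculation):

```python
from statistics import NormalDist

def power_two_prop(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for a
    difference in proportions, with n respondents per wave.
    Uses the pooled standard error under H0 and the unpooled
    standard error under H1 (normal approximation)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)             # e.g. ~1.96 for alpha = .05
    p_bar = (p1 + p2) / 2                          # pooled proportion under H0
    se0 = (2 * p_bar * (1 - p_bar) / n) ** 0.5     # SE assuming no real change
    se1 = (p1 * (1 - p1) / n + p2 * (1 - p2) / n) ** 0.5  # SE under a real change
    return nd.cdf((abs(p2 - p1) - z_crit * se0) / se1)

# Company XYZ's design: 32% vs. 42% awareness, 200 respondents per wave
print(round(power_two_prop(0.32, 0.42, 200), 2))   # roughly a coin flip
```

With a real 10-point shift in the market, this design detects it only slightly more often than not.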

The study was underpowered: too small to reliably reveal a true change in the population. Was the study effective in assessing a change in customer awareness? Probably not. So why spend the money to conduct it at all? Does Company XYZ drop the campaign or keep it? Is it really any better informed making this decision after the study than it was before spending the money to conduct the research?

 

Sampling risk: Type I vs. Type II

Type I: the risk that what you think is a finding isn't one. The finding was the result of sampling error and does not reflect the reality of your market.

Type II: the risk that there was a finding to be found but you missed it. Sampling error actually concealed a reality that exists in your market.

A common mistake is to reduce sample size to save money without considering the increased risk of a Type II error. Statistical power is the likelihood of detecting a real difference if one exists. When you reduce statistical power, you increase the risk of missing a finding.

A market research truism: under-powered studies (often designed to save money) actually waste money because they have a very low chance (often less than 50%) of finding what you are looking for.

 

Minimize sampling error risk according to the magnitude and cost of the business decision

Both types of sampling error must be considered in the initial design phase of your quantitative studies. The magnitude of the Type I and Type II errors (see sidebar) your company finds acceptable will depend on the type of business decision you are making and the magnitude of the impact that decision will have on your company.

The obvious solution to minimizing both types of sampling error is to budget for the larger sample (with sample sizes of 400 in each wave, Company XYZ's likelihood of detecting that 10% difference at the .05 level would have increased to 81%). But that solution is not always feasible.
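The effect of sample size on power can be seen directly with the same normal-approximation sketch used above (the exact figure varies by formula; this approximation comes out a few points above the quoted 81%):

```python
from statistics import NormalDist

def power_two_prop(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for proportions
    (normal approximation, n respondents per wave)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se0 = (2 * p_bar * (1 - p_bar) / n) ** 0.5
    se1 = (p1 * (1 - p1) / n + p2 * (1 - p2) / n) ** 0.5
    return nd.cdf((abs(p2 - p1) - z_crit * se0) / se1)

# Doubling the per-wave sample from 200 to 400 sharply raises the
# chance of detecting a real 32% -> 42% shift at the .05 level.
for n in (100, 200, 400, 800):
    print(n, round(power_two_prop(0.32, 0.42, n), 2))
```

Running the sweep makes the trade-off concrete: power climbs steeply with n at first, then flattens, which is why the jump from 200 to 400 buys so much.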

With smaller sample sizes, relaxing the significance level in your analysis can help reduce the Type II error (meaning you will have a greater chance of detecting findings that do exist); however, it increases the risk of a Type I error.
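That trade-off can be quantified with the same approximation: each step up in alpha buys power, but also raises the false-positive (Type I) risk by the same amount. The alpha values below are illustrative, not a recommendation:

```python
from statistics import NormalDist

def power_two_prop(p1, p2, n, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test for proportions
    (normal approximation, n respondents per wave)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se0 = (2 * p_bar * (1 - p_bar) / n) ** 0.5
    se1 = (p1 * (1 - p1) / n + p2 * (1 - p2) / n) ** 0.5
    return nd.cdf((abs(p2 - p1) - z_crit * se0) / se1)

# With n = 200 per wave, relaxing alpha raises the chance of detecting
# a real 32% -> 42% shift, at the cost of more false positives.
for alpha in (0.01, 0.05, 0.10, 0.20):
    print(alpha, round(power_two_prop(0.32, 0.42, 200, alpha), 2))
```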

 

Yarnell Solution

We have the tools to estimate the likelihood that a study will detect a given difference, based on different sample sizes and confidence levels, and can compute your risk factors under a variety of scenarios. You will make better decisions about what sample size to use, what confidence levels to apply, and, more importantly, how to use the survey results to best support your business decisions.

This content was provided by Yarnell Inc. Visit their website at www.yarnell-research.com.
