

Research Tip: Statistically Significant doesn’t mean Strategically Significant

One of the good things, and one of the bad things, about quantitative research is the sense of truth and authority that numbers lend to a situation.  It gives managers something concrete to work with, a measurement of progress, a sense of the market, an objective description of reality based in mathematics.

The dark side of the numbers is that they are just that: numbers, with no meaning beyond the interpretation the analyst or manager gives them.  So while the numbers can be perfectly accurate from a mathematical standpoint, the numbers themselves can't tell you whether the interpretation is correct, whether they describe the right thing, or whether they describe anything at all.

There are many research analysts out there who will say, with the authority of hundreds of years of accumulated scholarly work in statistics behind them, “The difference is statistically significant at the 95% confidence level.”  Or they might say, “The difference wasn’t significant.”

Unfortunately, too many analysts stop there.

When computers spit out data tabulations, the machine has no way to understand what is important and what isn’t, so statistical tests are performed on everything.  In my years as a researcher, I’ve seen the little notation in the tabulations flag earth-shattering findings like:

  • People with higher incomes tend to have more education
  • Women are more likely than men to see an OB-GYN
  • Younger people use text messaging more than older people

All statistically significant.  All meaningless from the perspective of addressing the business problem at hand.

The flip side is when something directly related to the research issue isn’t flagged as statistically significant, yet the numbers for some interesting groups are more than a few points apart.  Just because a difference didn’t pass a statistical test doesn’t mean it should be disregarded as an important finding.  Before simply dismissing it, I would suggest re-testing it at a lower confidence level to see if it becomes significant, as in the sketch below.
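
For the statistically inclined, here is a minimal sketch of that re-test using a pooled two-proportion z-test in Python.  Every count and percentage below is hypothetical, made up purely for illustration:

```python
# A sketch of re-testing a borderline difference at a lower confidence
# level.  All counts below are hypothetical, invented for illustration.
from math import sqrt, erfc

def two_prop_pvalue(x1, n1, x2, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return erfc(abs(z) / sqrt(2))  # two-sided normal tail probability

# Hypothetical result: 48% of 150 buyers vs. 37% of 140 non-buyers
p = two_prop_pvalue(72, 150, 52, 140)

for confidence in (0.95, 0.90):
    alpha = 1 - confidence
    verdict = "significant" if p < alpha else "not significant"
    print(f"p = {p:.3f}: {verdict} at the {confidence:.0%} confidence level")
```

Here the difference misses the 95% bar (p > 0.05) but clears 90% (p < 0.10): exactly the kind of result that deserves a second look rather than an automatic dismissal.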

Whenever you look at research results, particularly differences between groups of people, you need to ask yourself a few questions:

  • Does this make any sense?
  • Is there a logical reason that this situation would exist?
  • Does this have any relevance to the research issues being investigated?

The finding should only make it into the report if you can answer “yes” to all of these questions.  If you answered no to either of the first two questions, a check of the data is in order to make sure everything is correct.  If it is, you may have had a problem with the sample frame or the wording of the question, or (the most likely scenario) you simply have a statistical artifact.
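
To see why a statistical artifact is the most likely scenario when every cell in a tabulation gets tested, consider a quick simulation.  This is a sketch with made-up data, not a real study: it runs 200 comparisons between groups drawn from identical populations and counts how many get flagged anyway.

```python
# A sketch of why blanket testing breeds artifacts: run 200 comparisons
# where NO real difference exists and count the "significant" ones.
import random
from math import sqrt, erfc

random.seed(1)
trials, false_positives = 200, 0
for _ in range(trials):
    # Two groups of 150 respondents drawn from the same 40% incidence rate
    x1 = sum(random.random() < 0.40 for _ in range(150))
    x2 = sum(random.random() < 0.40 for _ in range(150))
    pooled = (x1 + x2) / 300
    se = sqrt(pooled * (1 - pooled) * (2 / 150))
    z = (x1 / 150 - x2 / 150) / se if se else 0.0
    if erfc(abs(z) / sqrt(2)) < 0.05:  # "significant" at 95% confidence
        false_positives += 1

print(f"{false_positives} of {trials} null comparisons flagged as significant")
```

Roughly 5% of these null comparisons, about ten of the 200, will come up “significant” at the 95% confidence level by pure chance.  Multiply that across the hundreds of cells in a typical tabulation deck and artifacts are guaranteed.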

If you answered no to the last question, you probably have a perfectly valid “finding” that would elicit a chorus of “duh” and/or “so what?” from your audience.  The “findings” listed earlier are perfect examples of these meaningless significant results.

Managers who use research are relying on the analyst to find the valuable nuggets in the data.  You can only do that if you understand the difference between statistically significant and strategically significant.  Ask the questions suggested in this article and you’ll be well on your way.