Welcome to the inaugural post to the Random Sampling Blog v2.0.
A quick look at the archive will show that this was a fairly active blog through 2007 and part of 2008, and then, well…nothing.
In some ways I’m aiming to pick up where I left off in 2008. In some ways I can’t simply resume – the world has changed too much since that last post in May of 2008. The economy has certainly changed, business has changed, the practice of market research has changed, and Erickson Research has changed.
Back in v1.0, I focused mainly on the practice of research. Many of the posts were intended to be mini-tutorials on how to do better research. Here in v2.0, I plan to continue some of that, but I feel that I need to talk about much bigger issues, like the role of research in business and the huge paradigm shift that needs to happen in how research is done.
As they say, the journey of 1,000 miles starts with a single step. This post is really more about taking that first step to revive the Random Sampling Blog than it is about writing for an audience. Perhaps those of you who do find your way here can become regular visitors who hold my feet to the fire about delivering valuable content and making meaningful contributions to the future of the industry. In return, you'll get some good ideas that help you navigate what I think are the biggest changes in how research gets done since Gallup made survey research the norm some 60-70 years ago.
Researchers like their statistics. There’s something about having a number that’s comforting. It’s a specific number, right there on paper – or screen. We even get to talk about the margin of error and the level of certainty that goes with it.
All this makes it really easy to forget a very important point – accuracy and precision are two different things.
In our quest for precision – bigger scales, more detailed questions – accuracy often becomes an unintentional casualty.
How can that be?
When I work with a client on a survey, the conversation often turns to using a 10-point or 7-point scale instead of a 5-point scale. The usual line of thinking is that more points on the scale allow respondents to be more precise with their answers – so our research is more precise. This is true, but it ignores a more important point. More choices – like more points on the scale – make it more difficult for the respondent to answer the question. Honestly, could you explain the difference between rating one brand a 6 and another brand a 5 on a 7-point scale?
Probably not. Certainly not in any consistent way.
The same is true when we try to drill down to a very micro level about someone's attitudes and behaviors. There is a ton of research – particularly when it comes to behavior – telling us that people's self-reporting is often wrong. Yet we persist in asking very detailed questions. We would be much better off sacrificing some of the precision we think we're getting in favor of a question that is easier to answer, and therefore more accurate.
It all comes down to paying close attention to the questions you are asking.
I've written about this before – ask questions that people can answer! That will often mean sacrificing "precision." What you get in return, though, is accuracy. That's a trade-off I'm willing to make anytime.
Regular readers know how much I love to get up on my soapbox about presenting. Today, I came across a short, clear video from the folks at Speaking About Presenting. It compares the effectiveness of four different methods of displaying information in PowerPoint.
Check it out.