Many researchers are rightly concerned about “the state of respondent cooperation.” In plain English, they’re worried that fewer and fewer people are agreeing to participate in research studies. With telephone survey cooperation rates in the single digits and online survey response rates not what they used to be, there are real reasons for concern about non-response bias and data quality.
We can wring our hands and blame all kinds of things for this decline in research participation. The usual suspects that tend to get the blame include:
- People don’t have time to take surveys anymore
- Do not call lists
- Spam filters
- People can’t tell the difference between research and sales
I agree that each of these has contributed to lower research participation. But I would also suggest that many researchers need to look in the mirror to see a major cause of the problem.
For as long as I’ve been in the field, researchers have tended to treat the people taking surveys not as people, but as “respondents.” Talking widgets, if you will. It’s as if they could call their distributor and order up whatever they needed for their current project. Imagine the conversation: “I’ll take 500 men and 500 women. I only want the ones with high incomes, though. Do you have any advanced degree earners in stock? Good, I’ll take 200 of those…”
While this example is exaggerated for effect, it does sound distressingly like a typical conversation between a research supplier and a sample vendor.
The point here is that “respondents” have long been considered a consumable input to the production of the research company’s product. The result? People are given surveys that are boring, poorly written, irrelevant, and tedious. The primary solution to date? Throw money at them until they fill out the survey. As if people get over having their intelligence insulted and their time wasted for a few dollars.
If you want people to answer your questions, you need to treat them like people.
Make sure that the surveys you put out get the information you need, but pay as much attention to the respondent’s experience taking the survey as you do to the most convenient data format for your analysis. This means considering things such as:
- Keeping it short
- Keeping it relevant
- Rethinking the format of questions
Most people have a short attention span, particularly when it comes to things that aren’t central to their lives. Contrary to what many marketers and researchers would like to think, every survey ever written falls into this category. Keep your surveys to 10 or 15 minutes. This is under most people’s survey fatigue threshold, so you have a better chance of keeping respondents engaged and providing thoughtful answers throughout. To do this, you’ll need to limit the questions to those that directly contribute to meeting your well-defined objectives. You have [well-defined objectives](http://ericksonmr.com/blog/2007/03/26/start-from-the-end/), right?
Consider the audience for a survey. Studies have shown that people are happy to provide survey feedback when, among other things, the topic is relevant to them. Make sure that the people you are surveying have an interest in the topic. You’ll get better data because you are talking to people who actually know something about the subject and who are more motivated to tell you about it.
The way you choose to ask a question has a dramatic impact on people’s reaction. I’m not talking about asking leading questions here, I’m talking about the format you choose.
For example, suppose you want to measure people’s opinions of several competing brands. The reflex response here would be to use a scale of some sort to capture how brands perform on some list of attributes. It isn’t at all unusual to have 20, 30, or more attributes and 4 or 5 brands. That means the respondent has to decide on a number from 1 to 5 (or 7, or 10) between 80 and 150 times. After a while, even the most dedicated respondent is going to start picking anything just to get through to the next question.
One solution to this issue is to reformat the question to a simple yes/no response. Now, instead of 4 or 5 pages filled with grids rating each brand on each attribute, you have a single page with a grid of checkboxes. The respondent still has to make between 80 and 150 decisions, but now they are yes/no decisions, not judgments of degree. This is faster and easier for the respondent, and so less susceptible to fatigue- or boredom-induced random answers.
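To make the arithmetic above concrete, here’s a minimal sketch of the respondent-burden math. The decision counts come straight from the example (20–30 attributes times 4–5 brands); the per-decision timings are illustrative assumptions of mine, not survey-research constants:

```python
# Rough estimate of respondent burden for a brand-attribute question block,
# comparing a rating-scale grid with a yes/no checkbox grid.
# Timing figures below are illustrative assumptions only.

def decision_count(n_attributes: int, n_brands: int) -> int:
    """One decision per brand-attribute pair, in either format."""
    return n_attributes * n_brands

def estimated_minutes(n_decisions: int, seconds_per_decision: float) -> float:
    return n_decisions * seconds_per_decision / 60

# 20-30 attributes x 4-5 brands => 80 to 150 decisions, as in the text.
low = decision_count(20, 4)    # 80 decisions
high = decision_count(30, 5)   # 150 decisions

# Assumed timings: ~4 s to settle on a point of a 1-5 scale,
# ~1.5 s to tick or skip a yes/no checkbox.
scale_minutes = estimated_minutes(high, 4.0)
yesno_minutes = estimated_minutes(high, 1.5)

print(f"decisions: {low} to {high}")
print(f"rating grid: ~{scale_minutes:.1f} min, "
      f"yes/no grid: ~{yesno_minutes:.1f} min")
```

Under these assumed timings, the same 150 decisions drop from roughly ten minutes to under four, which is what keeps the whole question block below the fatigue threshold discussed earlier.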
There are certainly external forces impacting researchers’ ability to find quality respondents for surveys, but there is still plenty researchers can do to improve their response rates and data quality.
This article has only scratched the surface of specific steps that could be taken. The underlying theme is to think more like a marketer and less like a researcher: understand what your market values and deliver it. Thinking about respondents as your market may represent a paradigm shift for some, but it will open the door to addressing the causes of poor response that are under your control.