
Is the "long survey" dead?

A colleague sent me a link to a blog post arguing that the "long survey" is dead. The post takes the view that anything over 20 minutes is long. It also links to another post presenting data from SurveyMonkey surveys showing that the longer the questionnaire, the less time respondents spend on each question. The analysis doesn't really control for question length, among other things, but it's still suggestive.

In my world, 20 minutes is still a short survey. But the point is taken. There has been some research on the effect of announced survey length on response rates. There is probably a need for more.

Still, it might be time to start thinking about alternatives that improve response to long surveys. The most common is to offer a higher incentive, thereby counteracting the burden of the longer survey. Another alternative is to shorten the survey -- although that doesn't help if your questions are the ones getting tossed. Of course, substituting big data for elements of surveys is another option being explored.

Matrix sampling is another useful but little-used approach. It seems like you could run a power analysis for each item, each scale, and each model using data from the survey, and then subsample content that is overpowered. That takes a lot of work -- by central office staff -- but it might save more respondent (and interviewer) time than it costs.
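To make the matrix-sampling idea concrete, here is a minimal sketch of the assignment step. Everything in it is hypothetical: it assumes the questionnaire has been split into a "core" block that everyone answers plus a set of rotating modules, and that the power analysis has already told us how many modules each respondent can safely skip. The module names and counts are made up for illustration.

```python
import random

def assign_modules(respondent_ids, modules, per_respondent, seed=0):
    """Randomly assign each respondent a subset of item modules.

    Every respondent gets the core block; the rotating modules are
    subsampled so each respondent answers only `per_respondent` of them.
    A fixed seed keeps the assignment reproducible.
    """
    rng = random.Random(seed)
    assignments = {}
    for rid in respondent_ids:
        # Core items always appear; rotating modules are a random subset.
        assignments[rid] = ["core"] + rng.sample(modules, per_respondent)
    return assignments

# Hypothetical example: six rotating modules, each respondent sees two,
# so each module's expected sample size is 100 * 2 / 6, about 33 cases.
modules = [f"module_{k}" for k in range(1, 7)]
plan = assign_modules(range(100), modules, per_respondent=2)
```

The design choice this sketch highlights: the per-module sample size shrinks in proportion to how aggressively you subsample, which is exactly where the item-level power analysis would feed in.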

Another option is to split the interview into sessions across time and modes. This seems likely to become a more attractive design: a series of short surveys completed over some period of time.

It's probably worth exploring all of these options.


  1. Hi James. The "old" surveys - face-to-face surveys of an hour or more - will stay, though perhaps there will be fewer of them than before. It is self-administered surveys that are evolving, due to the nature of how we communicate online and on our mobiles. I think your last suggestion is very natural. Most young people check their e-mail or Facebook several times a day. A "panel survey" that asks people one or two questions several times a day for a short period might really work. I'm not sure it has ever been tried, apart from time-use surveys.


