Posts

Showing posts from November, 2012

How do you maximize response rates?

It might be worth thinking about this problem as a contrast to maximizing something else.

I've been thinking of response rate maximization as if it were a simple problem. "Always go after the case with the highest remaining probability of response." It has an intuitive appeal. But is it really that simple? We've been working really hard on this problem for many years. I think, in practice, our solutions are probably more complicated than that.

If you focus on the easy-to-respond cases early, will that really maximize the response rate? If we looked at the whole process and set a target response rate, we might do something different. We might start with the difficult cases and then finish up with the easy cases. Groves and Couper (1998) made suggestions along these lines.
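To make this concrete, here's a toy simulation of the two orderings. Everything in it is hypothetical -- the propensities, call budget, and field period are made-up numbers, and real cases don't have fixed, known response probabilities -- but it shows how the ordering rule interacts with a capped level of effort.

```python
import random

random.seed(42)

# Hypothetical sample: 1,000 cases, each with a fixed per-call
# probability of responding (easy cases high, hard cases low).
cases = [random.uniform(0.02, 0.40) for _ in range(1000)]

def simulate(probs, calls_per_day, days, easiest_first=True):
    """Each day, call the pending cases with the highest (or lowest)
    response propensity first; unresolved cases return to the queue."""
    pending = sorted(probs, reverse=easiest_first)
    responded = 0
    for _ in range(days):
        todo, rest = pending[:calls_per_day], pending[calls_per_day:]
        still_pending = [p for p in todo if random.random() >= p]
        responded += len(todo) - len(still_pending)
        pending = sorted(rest + still_pending, reverse=easiest_first)
    return responded / len(probs)

easy_rate = simulate(cases, calls_per_day=200, days=10, easiest_first=True)
hard_rate = simulate(cases, calls_per_day=200, days=10, easiest_first=False)
print(f"easiest-first: {easy_rate:.3f}, hardest-first: {hard_rate:.3f}")
```

The point isn't which ordering wins in this toy world; it's that with a fixed call budget the two rules spend effort very differently, so "always call the most likely responder" and "maximize the final response rate" are not automatically the same policy.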

Greenberg and Stokes (1990) essentially work the problem out very formally using a Markov Decision model. They minimize calls and nonresponse rate. Their solution wasn't…
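The flavor of that kind of model can be shown with a much simpler toy: a one-case stopping problem. This is not Greenberg and Stokes's actual formulation -- just an illustrative sketch with made-up costs, where each call has a fixed price, an unresolved case pays a nonresponse penalty, and the optimal policy decides whether another call is worth it.

```python
def value(p, call_cost, nonresponse_cost, calls_left):
    """Expected cost of the optimal policy for a single case:
    either stop now (accept the nonresponse cost) or pay for one
    more call and recurse on the remaining call budget."""
    if calls_left == 0:
        return nonresponse_cost
    keep_calling = call_cost + (1 - p) * value(
        p, call_cost, nonresponse_cost, calls_left - 1)
    return min(nonresponse_cost, keep_calling)

# A hard case (p = 0.05) vs. an easy case (p = 0.30), same costs.
hard = value(0.05, call_cost=1.0, nonresponse_cost=10.0, calls_left=8)
easy = value(0.30, call_cost=1.0, nonresponse_cost=10.0, calls_left=8)
```

With these (arbitrary) numbers, the optimal policy never calls the hard case at all -- the expected cost of calling exceeds the penalty for giving up -- while it keeps calling the easy one. The formal models make exactly this kind of trade-off, only jointly across the whole sample.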

How much has the response rate shaped our methods?

In recent posts, I've been speculating about what it might mean to optimize survey data collections to something other than the response rate. We might also look at the "inverse" problem -- how has the response rate shaped what we currently do? Of course, the response rate does not dominate every decision that gets made on every survey. But it has had a far-reaching impact on practice. Why else would we need to expend so much energy reminding ourselves that it isn't the whole story?

The outlines of that impact are probably difficult to determine. For example, interviewers are often judged by their response rates (or possibly conditional response rates). If they were to be judged by some other criterion, how would their behavior change? For example, if interviewers were judged by how balanced their set of respondents was, how would that impact their moment-to-moment decision-making? What would their supervisors do differently? What information would sample managemen…

Which objective function?

In my last post, I argued that we need to take a multi-faceted approach to examining the possibility of nonresponse bias -- using multiple models, different approaches, etc.

But any optimization problem requires that an objective function be defined: a single quantity that is to be minimized or maximized. We might argue that the current process treats the response rate as the objective function and that all decisions are made with the goal of maximizing it. It's probably the case that most survey data collections aren't fully 'optimized' in this regard, but current practice may be close to optimal.
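A tiny numerical sketch shows why the choice of objective function matters. The numbers and subgroups here are invented, and the balance measure is only loosely in the spirit of an R-indicator (which is defined on estimated response propensities, not subgroup rates), but it makes the trade-off visible.

```python
import statistics

def response_rate(respondents, sample_size):
    return respondents / sample_size

def balance_indicator(subgroup_rates):
    """1 - 2 * SD of subgroup response rates: 1.0 means perfectly
    even response across subgroups, lower means more uneven."""
    return 1 - 2 * statistics.pstdev(subgroup_rates)

# Outcome A: higher overall rate, very uneven across subgroups.
rr_a = response_rate(620, 1000)
bal_a = balance_indicator([0.80, 0.62, 0.44])

# Outcome B: lower overall rate, nearly even across subgroups.
rr_b = response_rate(550, 1000)
bal_b = balance_indicator([0.57, 0.55, 0.53])
```

Outcome A wins on response rate; outcome B wins on balance. A data collection optimized to one objective could rationally prefer the outcome the other objective rejects -- which is exactly why the choice of indicator is not a technical afterthought.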

If we want to optimize differently, then we still need some kind of indicator to maximize (or minimize, depending on the indicator). A recent article in Survey Practice tried several different indicators in this role using simulation. Before placing a new indicator in this role, I think we need at least two things:

1) Experimental research to determine the impact of being tuned to a differe…