
Posts

Showing posts from June, 2013

Call Windows as a Pattern

The paradata book, edited by Frauke Kreuter, is out! I have a chapter in the book on call scheduling. One of the problems that I mention is how to define call windows. The goal should be to create homogeneous units. For example, I made the following heatmap, which shows contact rates by hour for a face-to-face survey. The figure includes contact rates for all cases and for the subset of cases that were determined to be eligible. I used this heatmap to define contiguous call windows that were homogeneous with respect to contact rates. I relied on ocular inspection to define the call windows. I think this could be improved. First, clustering techniques might produce more efficient results. I assumed that the call windows had to be contiguous; this might not be true. Second, along what dimension do we want these windows to be homogeneous? Contact rates are really a proxy. We want the windows to be homogeneous with respect to the results of the next call on any case, or really our final goal of…
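
As a rough illustration of the clustering idea, here is a minimal sketch (not the analysis from the chapter) that groups hours of the day into call windows by k-means clustering on observed contact rates. The data, column names, and number of clusters are all hypothetical; adding the hour itself as a scaled feature would push the solution back toward contiguous windows.

    # Minimal sketch: cluster hours into call windows by contact rate.
    # The data frame and its columns ("hour", "contact_rate") are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.cluster import KMeans

    # Illustrative contact rates by hour -- not from the survey in the post.
    rates = pd.DataFrame({
        "hour": np.arange(9, 21),
        "contact_rate": [0.05, 0.06, 0.07, 0.10, 0.09, 0.08,
                         0.12, 0.18, 0.22, 0.25, 0.24, 0.15],
    })

    # Clustering on contact rate alone does not force windows to be contiguous,
    # which matches the point above that contiguity is an assumption, not a rule.
    km = KMeans(n_clusters=3, n_init=10, random_state=0)
    rates["window"] = km.fit_predict(rates[["contact_rate"]])

    print(rates.sort_values("hour"))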

What is "responsive design"?

This is a question that I get asked quite frequently. Most of what I would want to say on the topic is in this paper I wrote with Mick Couper a couple of years ago. I have been thinking that a little historical context might help in answering such a question. I'm not sure the paper we wrote does that. I imagine that surveys of old were designed ahead of time, carried out, and then evaluated after they were complete. That is probably too simple, but it makes sense. In field surveys, it was hard to even know what was happening until it was all over. As response rates declined, it became more difficult to manage surveys. The uncertainty grew. Surveys ended up making ad hoc changes more and more frequently. "Oh no, we aren't hitting our targets. Increase the incentive!" That seems like a bad process. There isn't any planning, so bad decisions and inefficiency are more likely. And it's hard to replicate a survey that includes a "panic" phase. Not to put…

Incentive Experiments

This post by Andy Peytchev got me thinking about experimental results. It seems like we spend a lot of effort on experiments that are replicated elsewhere. I've been part of many incentive experiments. Only some of those results are published. It would be nice if more of those results were widely available. Each study is a little different and may need to evaluate incentives for its specific "essential conditions." And some of that replication is good, but it seems that the overall design of these experiments is pretty inefficient. We typically evaluate incentives at specific points in time, then change the incentive. It's like a step function. I keep thinking there has to be inefficiency in that process. First, if we don't choose the right time to try a new experiment, then we will experience losses in efficiency and/or response rates. Second, we typically ignore our prior information and allocate half the sample to each of two conditions. Third, we set up a…
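
On the second point, one way to use prior information instead of a fixed 50/50 split would be a Bayesian, bandit-style allocation. This is only a sketch of that idea, not anything from the post: the incentive amounts, prior counts, and outcomes below are entirely hypothetical.

    # Sketch of prior-informed allocation (Thompson sampling) across two
    # hypothetical incentive conditions; all counts below are made up.
    import numpy as np

    rng = np.random.default_rng(0)

    # Prior [responses, nonresponses] for each incentive, e.g. from earlier studies.
    priors = {"$5": [30, 70], "$10": [45, 55]}

    def assign_next_case(priors, rng):
        # Draw a plausible response rate for each arm from its Beta posterior
        # and assign the next case to the arm with the larger draw.
        draws = {arm: rng.beta(a, b) for arm, (a, b) in priors.items()}
        return max(draws, key=draws.get)

    def update(priors, arm, responded):
        # Add the observed outcome to the chosen arm's counts.
        priors[arm][0 if responded else 1] += 1

    arm = assign_next_case(priors, rng)
    update(priors, arm, responded=True)
    print(arm, priors)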

Reinventing the wheel...

This blog on how machine learning reinvented many of the techniques first developed in statistics got me thinking. When I dip into non-survey methods journals to see how research from survey methodology is used, it sometimes seems like people in other fields are either not aware of our research or only vaguely aware. For instance, it seems like there is research on incentives in several substantive fields that goes on without awareness across the disciplines. It's not that everyone needs to be a survey methodologist, but it would be nice if there were more awareness of our research. Otherwise there is the risk that researchers in other fields will simply reinvent the wheel.

Nonresponse Bias Analysis

I've been thinking about a nonresponse bias analysis that I am working on for a particular project. These analyses often have the goal of showing that we could lower response rates without increasing the relative bias. I wrote about the risks of this approach in a recent post. Now I'm wondering about alternatives. There may be risks from lowering response rates, but is it wise to continue sinking resources into producing high response rates as a protection against potential biases? I recently read an article by Holle and colleagues in which they actually worked out the sample size increase they could afford under reduced effort (fewer calls, no refusal conversion, etc.). They made explicit tradeoffs between the risk of nonresponse bias (judged to be minimal) and sampling error. I'm still not completely satisfied with this approach. I'd like to see a design that considers the risks and allocates resources in proportion to the risk in some way.
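
To make that kind of tradeoff concrete, here is a back-of-the-envelope sketch in the spirit of the Holle et al. calculation, with entirely hypothetical numbers: reduced effort lowers the cost per complete, the savings buy a larger sample, and the question is whether the smaller sampling variance offsets whatever nonresponse bias the reduced effort introduces.

    # Hypothetical cost/error tradeoff: does a cheaper, reduced-effort protocol
    # with a larger sample beat the full-effort protocol on mean squared error?
    budget = 100_000                         # total data-collection budget
    cost_full, cost_reduced = 250.0, 175.0   # cost per completed interview
    n_full = budget / cost_full
    n_reduced = budget / cost_reduced

    element_var = 0.25      # variance of the survey outcome (assumed)
    bias_reduced = 0.005    # assumed added nonresponse bias under reduced effort

    mse_full = element_var / n_full
    mse_reduced = element_var / n_reduced + bias_reduced**2

    print(f"full effort:    n = {n_full:.0f},  MSE = {mse_full:.6f}")
    print(f"reduced effort: n = {n_reduced:.0f},  MSE = {mse_reduced:.6f}")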

Renamed the blog

I wanted to rename the blog from the moment that I first named it. Which just means that I should have mulled it over a little more back then. Oh well, what's in a name anyway...