

Showing posts from November, 2013

Do response propensities change with repeated calling?

I read a very interesting article by Mike Brick. The discussion of changing propensities in section 7 (pages 341-342) was particularly thought-provoking. He discusses the interpretation of changes in average estimated response propensities over time. Is the change due to shifts in the composition of the active sample? Or is it due to within-unit decreases in probability caused by repeated application of the same protocol (i.e., more calls)?
To me, it seems evident that people's propensities to respond do change. We can increase a person's probability of responding by offering an incentive. We can decrease another person's probability by saying "the wrong thing" during the survey introduction.

But the article specifically discusses whether additional calls actually change the callee's probability of response. In most models, the number of calls is a very powerful predictor. Each additional call lowers the probability of response. Brick points out that there are two interpret…
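To make the "powerful predictor" point concrete, here is a minimal sketch of how call count typically enters such a model. The coefficients are purely illustrative, not taken from the article or from any fitted model:

```python
import math

# Hypothetical logistic propensity model:
#   logit(p) = B0 + B1 * prior_calls, with a negative B1,
# so each additional call is associated with a lower estimated
# probability of response on the next attempt.
B0, B1 = -1.0, -0.35  # illustrative values only

def response_propensity(prior_calls: int) -> float:
    """Estimated probability of response on the next call attempt."""
    z = B0 + B1 * prior_calls
    return 1.0 / (1.0 + math.exp(-z))

for k in range(5):
    print(k, round(response_propensity(k), 3))
```

The open question Brick raises is whether this association reflects a causal, within-person decline in propensity or simply the changing mix of cases that survive to higher call numbers.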

Optimal Resource Allocation and Surveys

I just got back from Amsterdam where I heard the defense of a very interesting dissertation. You can find the full dissertation here. One of the chapters is already published and several others are forthcoming.

The dissertation uses optimization techniques to design surveys that maximize the R-Indicator while controlling measurement error for a fixed budget. I find this to be very exciting research as it brings together two fields in new and interesting ways. I'm hoping that further research will be spurred by this work.
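In stylized form (my notation, not the dissertation's), the design problem looks something like the following, using the usual definition of the R-indicator as one minus twice the standard deviation of the response propensities:

```latex
\max_{d} \; R\bigl(\rho(d)\bigr) = 1 - 2\,S\bigl(\rho(d)\bigr)
\quad \text{subject to} \quad
\sum_{i} c_i(d) \le B, \qquad M(d) \le m,
```

where $d$ is the design, $c_i(d)$ the per-case cost, $B$ the budget, and $M(d) \le m$ a constraint holding measurement error at an acceptable level.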

Daily Propensity Models

We estimate daily propensity models for a number of reasons. A while ago, I started looking at the consistency of the estimates from these models. I worry that the estimates may be biased early in the field period.

I found this example a couple of years ago where the estimates seemed pretty consistent.

I went back recently to see what examples of inconsistent estimates I could find. I have this example where an estimated coefficient (for the number of prior call attempts) in a daily model varies a great deal over the first few months of a study.
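One simple explanation worth ruling out is sampling variability: early in the field period there are few completed attempts, so coefficient estimates bounce around. A rough simulation sketch (simulated data, not the actual study; the "true" coefficients are made up) shows how the estimate for the prior-calls coefficient stabilizes as the sample accumulates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed true model for the simulation: logit(p) = 0.5 - 0.3 * prior_calls.
TRUE_B = np.array([0.5, -0.3])

def simulate(n):
    """Draw n cases with 0-9 prior calls and Bernoulli response outcomes."""
    calls = rng.integers(0, 10, size=n)
    X = np.column_stack([np.ones(n), calls])
    p = 1 / (1 + np.exp(-X @ TRUE_B))
    y = (rng.random(n) < p).astype(float)
    return X, y

def fit_logistic(X, y, iters=25, ridge=1e-8):
    """Newton-Raphson fit of a plain logistic regression."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-np.clip(X @ b, -30, 30)))
        W = p * (1 - p)
        grad = X.T @ (y - p)
        H = (X * W[:, None]).T @ X + ridge * np.eye(X.shape[1])
        b = b + np.linalg.solve(H, grad)
    return b

# Coefficient on prior calls, re-estimated "daily" as data accumulate.
for n in (50, 200, 1000, 5000):
    b = fit_logistic(*simulate(n))
    print(n, round(b[1], 3))
```

If the observed instability exceeds what this kind of sampling noise would produce, that points to something more substantive changing over the field period.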

It turns out that some of these coefficient estimates are significantly different.

The model from this example was used to classify cases: the estimated propensities were split into tertiles. These differences in estimation changed the classification of only about 15% of the cases. But that is 15% of cases that are misclassified at least some of the time.
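A rough sketch of the kind of comparison I have in mind, using simulated propensities rather than the actual study data (the noise level is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def tertile(p):
    """Classify propensities into tertiles 0/1/2 using sample cutoffs."""
    lo, hi = np.quantile(p, [1/3, 2/3])
    return np.digitize(p, [lo, hi])

# Hypothetical propensities from two daily model fits: the second is
# the first plus estimation noise (sd = 0.05, an assumed value).
p_day1 = rng.uniform(0.05, 0.6, size=1000)
p_day2 = np.clip(p_day1 + rng.normal(0, 0.05, size=1000), 0.01, 0.99)

moved = np.mean(tertile(p_day1) != tertile(p_day2))
print(f"share of cases changing tertile: {moved:.1%}")
```

Cases near a tertile boundary are the ones that flip, which is why even modest day-to-day estimation differences translate into a nontrivial share of reclassified cases.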

Speaking of costs...

I found another interesting article that talked about costs. This one, from Teitler and colleagues, described the apparent nonresponse biases present at different levels of cost per interview. This cuts to the heart of the problem. The basic conclusion was that, at least in this case, the most expensive interviews didn't change estimates.

This makes it possible to discuss the tradeoffs in more specific terms. Given the known share of the budget that did not change estimates, could you make greater improvements by recruiting more low-cost cases? By spending more on questionnaire design? Etc.

Of course, that's easy to say after the fact. Before the fact, armed with less than complete knowledge, one might want to go after the expensive cases to be sure they are not different. Still, I'd argue that you'd want to do that in a way that controls costs (e.g., subsampling) until you have more certainty about the value of those data.