Posts

Showing posts from February, 2014

Estimating Response Probabilities for Surveys

I recently went to a workshop on adaptive treatment regimes. We were presented with a situation where researchers were attempting to learn about the effectiveness of a treatment for a chronic condition such as smoking addiction. The treatment is applied at several points over time and can be changed based on changes in the person's condition (e.g., they report stronger urges to smoke). In this setup, they can learn effective treatments at the patient level. In surveys, we observe the successful outcome only one time: we get the interview, and we are done. We estimate response propensities by averaging over sets of cases, assuming that each person within a set is exchangeable, not by observing responses to multiple survey requests on the same person. Even panel surveys are only a little different. The follow-up interviews are often only with cases that responded at t=1. Even when there is follow-up with the entire sample, we usually leverage the fact that this is follow-up to ...
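A minimal sketch of what "averaging over sets of cases" can look like in practice. The data frame, the grouping variable, and the 0/1 response flag below are invented for illustration; in a real application the sets would be defined by frame or paradata covariates.

```python
import pandas as pd

# Hypothetical case-level data: one row per sampled case, with a 0/1
# response indicator and a grouping variable that defines the "set"
# within which cases are treated as exchangeable.
cases = pd.DataFrame({
    "region":    ["NE", "NE", "NE", "S", "S", "S", "W", "W"],
    "responded": [1,    0,    1,    0,   0,   1,   1,   0],
})

# Estimated response propensity = mean response within each set.
# Every case in a set gets the same estimate; we never observe repeated
# survey requests for the same person.
cases["p_hat"] = cases.groupby("region")["responded"].transform("mean")
print(cases)
```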

Use of Prior Data in Estimation of Daily Propensity Models

I'm working on a paper on this topic. One of the things I've been looking at is the accuracy of predictions from models that use data collected during the field period. I think of this as a missing data problem. The daily models can yield estimates that are biased. For example, estimates based on today's data might overestimate the number of interviews tomorrow. This can happen if my estimate of the number of interviews to expect on the third call is based on a select set of cases that responded more easily (compared to the cases that haven't yet received a third call). One of the examples in the paper comes from contact propensity models I built for a monthly telephone survey a few years ago. Since it is monthly, I could use data from prior months. Getting the right set of prior data (or, in a Bayesian perspective, priors) is important. I found that the prior months' data had a contact rate of 9.4%. The current month had a contact rate of 10.9%, but my estimates for the current month ...
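One hedged way to read "prior data as priors" is a beta-binomial update: take the prior months' contact rate as the mean of a Beta prior and combine it with the current month's call records. The prior strength and the current-month counts below are hypothetical (the counts are chosen to roughly match the 10.9% rate mentioned above); only the 9.4% and 10.9% figures come from the post.

```python
# Blend prior months' data with current-month call records by treating the
# prior contact rate as a Beta prior and updating with current calls.
prior_rate = 0.094      # contact rate from prior months (from the post)
n0 = 500                # assumed effective prior sample size (a tuning choice)
a, b = prior_rate * n0, (1 - prior_rate) * n0

calls_so_far = 1200     # hypothetical current-month call attempts
contacts_so_far = 131   # hypothetical current-month contacts (~10.9%)

# Posterior mean shrinks the current-month rate toward the prior rate.
posterior_mean = (a + contacts_so_far) / (n0 + calls_so_far)
print(f"Blended contact propensity estimate: {posterior_mean:.3f}")
```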

Are we really trying to maximize response rates?

I sometimes speculate that we may be in a situation where the following is true: our goal is to maximize the response rate; we research methods to do this; and we design surveys based on this. Of course, the real world is never so "pure." I'm sure there must be departures from this all the time. Still, I wonder what the consequences of maximizing (or minimizing) something else would be. Could research on increasing response still be useful under a new guiding indicator? I think that in order for older research to be useful under a new guiding indicator, the information about response has to be linked to some kind of subgroups in the sample. Indicators other than the response rate would place different values on each case (the response rate places the same value on each case). So for methods to be useful in a new world governed by some other indicator, those methods would have to be useful for targeting some cases. On the simplest level, we don't want the average effect on ...
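To make the point about indicators valuing cases differently concrete, here is a small sketch contrasting the response rate (an unweighted mean, so every case counts the same) with an R-indicator-style measure based on the spread of estimated response propensities. The R-indicator is just one possible alternative guiding indicator, and the propensities here are hard-coded for illustration rather than coming from a fitted model.

```python
import numpy as np

# Hypothetical estimated response propensities for a small sample of cases.
p_hat = np.array([0.9, 0.8, 0.7, 0.2, 0.1, 0.3, 0.85, 0.15])

# The response rate weights every case equally; an R-indicator-style
# measure (1 - 2 * SD of propensities) instead rewards balance, so
# raising a low-propensity case "buys" more than raising a high one.
response_rate = p_hat.mean()
r_indicator = 1 - 2 * p_hat.std()

print(f"Expected response rate: {response_rate:.2f}")
print(f"R-indicator (sketch):   {r_indicator:.2f}")
```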

Tracking Again...

I'm still thinking about this one, and I had an additional thought. It is possible to predict which cases are likely to be difficult to locate. Couper and Ofstedal have an interesting chapter on the topic in the book Methodology of Longitudinal Surveys. I also recall that the NSFG Cycle 5 documentation had a model for predicting the probability of locating someone. Given that information, it should be easy to stratify samples for differential effort. For instance, it might be better to use expensive effort early on cases that are expected to be difficult, if doing so saves on the early, inexpensive steps. The money saved might be trivial, but the time could be important. If you find them more quickly, perhaps you can more easily interview them.
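A rough sketch of what stratifying for differential tracking effort might look like, assuming a locating-propensity model along the lines described above. The covariates, the simulated data, and the probability cutoffs are all made up for illustration; this is not the NSFG Cycle 5 model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Simulate a hypothetical panel frame: prior-wave covariates plus a 0/1
# flag for whether the case was eventually located.
rng = np.random.default_rng(0)
frame = pd.DataFrame({
    "moved_since_wave1": rng.integers(0, 2, 500),
    "renter":            rng.integers(0, 2, 500),
    "age":               rng.integers(18, 80, 500),
})
logit = 1.0 - 1.5 * frame["moved_since_wave1"] - 0.8 * frame["renter"] + 0.02 * frame["age"]
frame["located"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fit a locating-propensity model, then stratify for differential effort:
# cases predicted to be hard to find get the expensive steps up front.
X = frame[["moved_since_wave1", "renter", "age"]]
model = LogisticRegression().fit(X, frame["located"])
frame["p_locate"] = model.predict_proba(X)[:, 1]
frame["effort_stratum"] = pd.cut(
    frame["p_locate"], bins=[0, 0.5, 0.8, 1.0],
    labels=["expensive tracking first", "standard", "minimal"],
)
print(frame["effort_stratum"].value_counts())
```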