Friday, February 28, 2014

Estimating Response Probabilities for Surveys

I recently went to a workshop on adaptive treatment regimes. We were presented with a situation in which researchers were attempting to learn about the effectiveness of a treatment for a chronic condition like smoking addiction. The treatment is applied at several points over time and can be changed based on changes in the person's condition (e.g., they report stronger urges to smoke). In this setup, effective treatments can be learned at the patient level.

In surveys, we observe successful outcomes only one time. We get the interview, and we are done. We estimate response propensities by averaging over sets of cases, within which we assume each person is exchangeable, rather than by observing responses to multiple survey requests made to the same person.
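As a toy illustration of that averaging, here is a minimal sketch; the strata, the covariate, and the data are all hypothetical:

```python
# Sketch of estimating response propensities by pooling over sets of cases
# assumed exchangeable (here, two hypothetical strata), rather than by
# observing repeated requests on the same person.
from collections import defaultdict

cases = [  # (stratum, responded) -- illustrative data only
    ("urban", 1), ("urban", 0), ("urban", 0), ("urban", 1),
    ("rural", 1), ("rural", 1), ("rural", 0), ("rural", 1),
]

totals = defaultdict(lambda: [0, 0])  # stratum -> [responses, cases]
for stratum, responded in cases:
    totals[stratum][0] += responded
    totals[stratum][1] += 1

# Every case in a stratum is assigned the stratum's average response rate.
propensities = {s: r / n for s, (r, n) in totals.items()}
print(propensities)
```

In practice the "sets" are usually defined by a model on frame and paradata covariates rather than a single stratifier, but the pooling logic is the same.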

Even panel surveys are only a little different. The follow-up interviews are often conducted only with cases that responded at t=1. Even when there is follow-up with the entire sample, we usually leverage the fact that this is a follow-up to a familiar survey.

I'd like to see experiments where multiple survey requests are made to the same units. It would be interesting to see if you could validate model results that way. Sadly, you might need a lot of survey requests per case (n=20+). But, hey, it's all in the name of science.

Friday, February 21, 2014

Use of Prior Data in Estimation of Daily Propensity Models

I'm working on a paper on this topic. One of the things that I've been looking at is the accuracy of predictions from models that use data collected during the field period. I think of this as a missing data problem. The daily models can yield biased estimates. For example, estimates based on today's data might overestimate the number of interviews tomorrow. This can happen if my estimate of the number of interviews to expect on the third call is based on a select set of cases that responded more easily (compared to the cases that haven't yet received a third call).

One of the examples in the paper comes from contact propensity models I built for a monthly telephone survey a few years ago. Since it is monthly, I could use data from prior months. Getting the right set of prior data (or, in a Bayesian perspective, priors) is important. I found that the prior months' data had a contact rate of 9.4%. The current month had a contact rate of 10.9%, but my estimates for the current month were below that due to the weight of the prior data. Ouch.

I'm thinking that a Bayesian setup for this problem will actually work much better. I can calibrate the priors so that, past a critical tipping point, the current data play the greater role.
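One way to see the tipping-point idea is a beta-binomial sketch. Only the 9.4% and 10.9% rates come from the example above; the call counts and the discount factor are hypothetical. Scaling down the prior's effective sample size lets the current month's data dominate:

```python
# Beta-binomial sketch: the prior months' contact rate (9.4%) enters as a beta
# prior whose effective sample size can be discounted so current data dominate.

def posterior_mean(prior_rate, prior_n, cur_contacts, cur_calls, discount=1.0):
    """Posterior mean contact rate; `discount` scales the prior's effective n."""
    alpha = prior_rate * prior_n * discount + cur_contacts
    beta = (1 - prior_rate) * prior_n * discount + (cur_calls - cur_contacts)
    return alpha / (alpha + beta)

# Hypothetical counts: 10,000 prior calls; 2,000 current calls, 218 contacts (10.9%).
full_prior = posterior_mean(0.094, 10_000, 218, 2_000, discount=1.0)
discounted = posterior_mean(0.094, 10_000, 218, 2_000, discount=0.05)
print(full_prior)   # pulled down toward the prior's 9.4%
print(discounted)   # much closer to the current month's 10.9%
```

The discount could also be made dynamic, e.g., shrinking as the current month's call count grows, which is one way to implement the tipping point.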

Friday, February 14, 2014

Are we really trying to maximize response rates?

I sometimes speculate that we may be in a situation where the following is true:
  1. Our goal is to maximize response rate
  2. We research methods to do this
  3. We design surveys based on this

Of course, the real world is never so "pure." I'm sure there must be departures from this all the time. Still, I wonder what the consequences of maximizing (or minimizing) something else would be. Could research on increasing response still be useful under a new guiding indicator?

I think that in order for older research to be useful under a new guiding indicator, the information about response has to be linked to some kind of subgroups in the sample. Indicators other than the response rate would place different values on each case (the response rate places the same value on every case). So for methods to be useful in a new world governed by some other indicator, those methods would have to be useful for targeting specific cases. On the simplest level, we don't want the average effect of incentives on response, for example; we want the effect of incentives on the subgroup that we need to bring in because doing so would help us balance response.
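To make the "different values on each case" point concrete, here is a minimal sketch; the subgroups, the rates, and the particular balance-style valuation are hypothetical choices, not any standard indicator:

```python
# Sketch: under the response rate, every remaining case is worth the same;
# under a balance-oriented indicator, a case's value depends on how
# underrepresented its subgroup currently is.

subgroup_rates = {"young_renters": 0.35, "older_owners": 0.70}  # hypothetical current rates
overall_rate = 0.55

def value_under_response_rate(subgroup):
    return 1.0  # every interview raises the response rate by the same amount

def value_under_balance(subgroup):
    # one simple choice: a case is worth its subgroup's shortfall from the overall rate
    return max(0.0, overall_rate - subgroup_rates[subgroup])

for g in subgroup_rates:
    print(g, value_under_response_rate(g), value_under_balance(g))
```

Under the second valuation, research on what moves response in the underrepresented subgroup is worth far more than an average effect across the whole sample.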

Sometimes we have this in existing research, sometimes we don't. I'm thinking that it might be there a fair amount since, in fact, we aren't pure maximizers of response rates.

Friday, February 7, 2014

Tracking Again...

I'm still thinking about this one, and I had an additional thought. It is possible to predict which cases are likely to be difficult to locate. Couper and Ofstedal have an interesting chapter on the topic in the book Methodology of Longitudinal Surveys. I also recall that the NSFG Cycle 5 documentation had a model for predicting the probability of locating someone.

Given that information, it should be easy to stratify samples for differential effort. For instance, it might be better to use expensive effort early on cases that are expected to be difficult, even if that only saves on the early inexpensive steps. The money saved might be trivial, but the time could be important. If you locate them more quickly, perhaps you can more easily interview them.
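A stratification rule along those lines could be as simple as the following sketch; the predicted probabilities and the 0.4 threshold are hypothetical:

```python
# Sketch: route cases with a low predicted locating probability straight to an
# expensive tracking step, and everyone else through the cheap steps first.

def assign_protocol(p_locate, threshold=0.4):
    """Hypothetical rule: expected-difficult cases get expensive effort early."""
    return "expensive_first" if p_locate < threshold else "cheap_steps_first"

# case id -> predicted probability of locating (illustrative values)
predicted = {"A": 0.85, "B": 0.30, "C": 0.55}
plan = {cid: assign_protocol(p) for cid, p in predicted.items()}
print(plan)
```

The threshold could be tuned against the cost and elapsed-time trade-off described above rather than fixed in advance.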
