Friday, December 27, 2013

Equal Effort... or Equal Probabilities

I've been reading an article on locating respondents in a panel survey. The authors were trying to determine what the locating protocol should be, and they reviewed the literature to see what the maximum number of calls should be.

As I noted in my last post, I was recently involved in a series of discussions on the same topic. But when I was reading this article, I thought immediately about how much variation there is between call sequences with the same number of calls. At the extreme, calling a case three times in one day is not the same as calling it three times over the course of three weeks.

I think the goal should be to apply protocols that have similar rates of being effective, i.e. produce similar response probabilities. But there aren't good metrics to measure the effectiveness of the many different possibilities. Practitioners need something that can evaluate how the chain of calls produces an overall probability of response. Using call-level estimates might be one way of getting such an estimate. The models would need to include factors for the different call windows that have been tried, and possibly the sequence and the time between calls. I worry that it gets too complex to model. Perhaps the sequence analysis of Kreuter and Kohler would be useful for this purpose.
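As a sketch of what I mean by chaining call-level estimates into an overall probability: if a model supplies each planned call with its own conditional response probability, the protocol's overall effectiveness is one minus the product of the per-call "miss" probabilities. The numbers below are invented for illustration, not estimates from any real study.

```python
def cumulative_response_prob(per_call_probs):
    """P(response by the last call) = 1 - product of per-call miss probabilities."""
    p_miss = 1.0
    for p in per_call_probs:
        p_miss *= (1.0 - p)
    return 1.0 - p_miss

# Hypothetical per-call probabilities for two protocols with the same
# number of calls. Protocol A burns three calls in one day (repeats in
# the same window add little); Protocol B spreads calls across windows.
protocol_a = [0.10, 0.03, 0.02, 0.08, 0.06]
protocol_b = [0.10, 0.08, 0.07, 0.06, 0.05]

print(round(cumulative_response_prob(protocol_a), 3))  # → 0.26
print(round(cumulative_response_prob(protocol_b), 3))  # → 0.312
```

Even with the same number of calls, the two protocols produce different overall response probabilities, which is the kind of comparison a metric would need to support.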

Friday, December 20, 2013

Simulation of Limits

In my last post, I advocated against truncating effort. In this post, I'm going to talk about doing just that. Go figure.

We were discussing call limits on a project that I'm working on. This is a study that we plan to repeat in the future, so we're spending a fair amount of time experimenting with design features on this first wave.

There is a telephone component to the survey, so we've been working on the question of how to specify the calling algorithm and, in particular, what if any ceiling we should place on the number of calls.

One way to approach it is to look at the distribution of final outcomes by call number -- sort of like a life table. Early calls are generally more productive (i.e. produce a final outcome) than late calls. You can look at the life table and see after which call very few interviews are obtained. You might truncate the effort at that point.
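A minimal sketch of that life-table view, using made-up call histories (the outcomes and call numbers below are invented, not data from the project):

```python
from collections import Counter

# Invented final outcomes: (disposition, call number at which it occurred),
# with 'I' = interview and 'R' = some other final disposition.
finals = [('I', 1), ('I', 2), ('I', 2), ('R', 3), ('I', 4),
          ('I', 5), ('R', 6), ('I', 7), ('I', 9), ('R', 12)]

interviews_by_call = Counter(k for outcome, k in finals if outcome == 'I')
total_interviews = sum(interviews_by_call.values())

# Cumulative share of all interviews obtained by each call number;
# a ceiling might be placed where this curve flattens out.
cum = 0
for call in range(1, max(k for _, k in finals) + 1):
    cum += interviews_by_call.get(call, 0)
    print(call, round(cum / total_interviews, 2))
```

In these toy data the curve is flat after call 9, which is where a life-table reading would suggest a ceiling. The catch, as the next paragraph notes, is that this is only a simulation of a limit, not a limit.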

The problem is that simulating what would happen if you place a ceiling on the number of calls isn't the same thing as actually placing a limit on calls, especially in a phone lab. In the phone lab, if interviewers don't place a call on a case that is over the limit, they will instead call a case that wouldn't otherwise have been called. Even if there are fewer hours of interviewing, this is likely to change how the calls are distributed over time.

Telephone labs are a complex system. Sometimes it feels like every time you turn a knob to change one setting, something unexpected happens somewhere else in the system. It makes for an interesting problem.

Saturday, December 14, 2013

On the mutability of response probabilities

I am still thinking about the estimation and use of response propensities during data collection. One tactic is to identify low-propensity cases and truncate effort on them. This is a cost-saving measure that makes sense if truncating the effort doesn't lead to a change in estimates.

I do have a couple of concerns about this tactic. First, each step back may seem quite small. But if we take this action repeatedly, we may end up with a significant cumulative change in the estimate. One way to check this is to continue the effort that would otherwise be truncated for a subsample of cases.

Second, and more abstractly, I am concerned that our estimates of response propensities will become reified in our minds. That is, a low-propensity case is always a low-propensity case and there is nothing to be done about it. In fact, the propensity is always conditional upon the design under which it is estimated. We ought to be looking for design features that change those probabilities -- preferably, design features that turn low-propensity cases into high-propensity ones. I think this is the idea behind "phase capacity" in responsive design.

Friday, December 6, 2013

More on changing response propensities

I've been thinking some more about this issue. A study that I work on monitors the estimated mean response propensities every day. The models are refit each day and the estimates updated. The mean estimated propensity of the active cases for each day is then graphed. Each day they decline.

The study has a second phase. In the second phase, the response probabilities start to go up. Olson and Groves wrote a paper using these data. They argue that the changed design has changed the probabilities of response. I agree with that point of view in this case.

But I also recently finished a paper that looked at the stability of the estimated coefficients over time from models that are fit daily on an ever-increasing dataset. The coefficients become quite stable after the first quarter. So the increase in probabilities in the second phase isn't due to changes in the coefficients.
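As an illustration of that stability check (a toy simulation with made-up parameters, not the actual models from the paper): with a single binary predictor, the logistic regression slope is just the log odds ratio, so it can be refit in closed form each "day" on a growing dataset and watched as it settles down.

```python
import math
import random

random.seed(2013)

def log_odds_ratio(records):
    """Logistic slope for one binary predictor = sample log odds ratio."""
    # Cell counts for (x, y); 0.5 is a continuity correction to avoid log(0).
    n = {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.5}
    for x, y in records:
        n[(x, y)] += 1
    return math.log((n[(1, 1)] * n[(0, 0)]) / (n[(1, 0)] * n[(0, 1)]))

def simulate_day(n_cases):
    """One day's cases: x = a binary predictor (e.g. 'ever made an
    appointment'), with a response effect that is fixed over time."""
    out = []
    for _ in range(n_cases):
        x = 1 if random.random() < 0.3 else 0
        p = 0.2 + 0.4 * x
        out.append((x, 1 if random.random() < p else 0))
    return out

# Refit on the cumulative dataset and print the coefficient periodically;
# it converges toward the true value, log(6) ≈ 1.79, as days accumulate.
data = []
for day in range(1, 91):
    data.extend(simulate_day(50))
    if day % 30 == 0:
        print(day, round(log_odds_ratio(data), 2))
```

The point of the sketch is just that, with a fixed underlying design, the refit coefficient stops moving once enough data accumulate, which is the pattern the paper found after the first quarter.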

The response probabilities we monitor don't account for the second phase (there's no predictor for that). They are based on call records. So how does the propensity go up? More calling should only decrease the probability of response... unless some other things change. My hypothesis is that cases started to make more appointments in the second phase than before. There was more contact. In general, behaviors that the model treats as evidence of an increasing probability of response started to happen more often. At some point, I'd like to look at the specifics of that.
