Showing posts from December, 2012

There are call records, and then there are call records...

In my last post, I talked about how errors in call records might lead to bad things. If these errors are systematic (i.e. interviewers always underreport and never overreport calls -- which seems likely), then adjustments based on call records can create (more) bias in estimates. I pointed to the simulation study that Paul Biemer and colleagues carried out. They used an adjustment strategy based on the call number. There are other ways to use the data from calls. For instance, if I'm using logistic regression to estimate the probability of response, I can fit a model with a parameter for each call number. Under that approach, I'm not making any assumption about the functional form of the relationship between call number and response. It's like the Kaplan-Meier estimator in survival analysis. If there is a relationship, then I can fit a logistic regression model with fewer parameters -- maybe as few as one if I think the relationship is linear on the logit scale. That smooths over some of the observed differences and assumes they
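To make the contrast concrete, here is a minimal sketch of the two modeling choices, using simulated, hypothetical data (not from any real survey): a saturated model with one dummy per call number, and a smoothed model with a single linear term.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical call-level data: call_number is the attempt number,
# response indicates whether that attempt produced an interview.
n = 2000
call_number = rng.integers(1, 8, size=n)        # attempts 1..7
true_p = 0.35 - 0.03 * call_number              # response probability declines with effort
response = rng.binomial(1, true_p)

# Saturated model: one dummy per call number (baseline = call 1).
# No functional-form assumption -- analogous to Kaplan-Meier.
X_dummies = (call_number[:, None] == np.arange(2, 8)).astype(float)
saturated = LogisticRegression().fit(X_dummies, response)

# Smoothed model: a single linear term in call number (on the logit scale).
linear = LogisticRegression().fit(call_number[:, None].astype(float), response)

print(saturated.coef_.shape)   # (1, 6): one coefficient per call 2..7
print(linear.coef_.shape)      # (1, 1): a single slope
```

The saturated model lets every call attempt have its own response propensity; the linear model borrows strength across attempts at the cost of a functional-form assumption.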

Errors in Call Records

I've been working with call records for a long while now. I started working with them in telephone labs. The quality of the records wasn't much of an issue there. But then I saw Paul Biemer give a presentation where he investigated this issue. I've been thinking about it a lot more over the last year or so. I recently saw that Biemer and colleagues have now published a paper on the topic. I read it over the weekend. They call these data "level-of-effort" (LOE) paradata. I agree with their conclusion that "if modelling with LOE data is to progress, the issue of data errors and their effects should be studied." (p. 17).

Historical Controls in Incentive Experiments

We often run experiments with incentives. The context seems to matter a lot, and the value of the incentive keeps changing, so we need to run many experiments to find the "right" amount to offer. These experiments are often run on repeated cross-sectional designs where we have a fair amount of experience. That is, we have already repeated the survey several times with a specific incentive payment. Yet, when we run experiments in this situation, we ignore the evidence from prior iterations. Of course, there are problems with the evidence from past iterations. There can be differences over time in the impact that a particular incentive can have. For example, it might be that as the value of the incentive is eroded by inflation, its impact on response rates lessens. There may also be differences associated with other changes made to the survey over time (even undocumented, seemingly minor ones). On the other hand, to totally discount this evidence se
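One way to split the difference is to down-weight, rather than ignore, the historical evidence. Here is a minimal sketch (all counts hypothetical, and the weighting scheme is mine, not from any particular paper) of a power-prior-style weight a0 on historical control data in a beta-binomial model:

```python
# Down-weighting historical controls with a power-prior-style weight.
# a0 = 0 ignores the history entirely; a0 = 1 pools it fully with the
# current experiment's control arm.

def control_posterior(y_cur, n_cur, y_hist, n_hist, a0):
    """Beta(1, 1) prior; returns the posterior mean of the control response rate."""
    alpha = 1 + y_cur + a0 * y_hist
    beta = 1 + (n_cur - y_cur) + a0 * (n_hist - y_hist)
    return alpha / (alpha + beta)

# Hypothetical numbers: 55/100 responded in the current control arm;
# prior iterations of the survey at the same incentive saw 600/1000.
ignore_history = control_posterior(55, 100, 600, 1000, a0=0.0)
partial_pool   = control_posterior(55, 100, 600, 1000, a0=0.3)
full_pool      = control_posterior(55, 100, 600, 1000, a0=1.0)

print(round(ignore_history, 3), round(partial_pool, 3), round(full_pool, 3))
```

Since the historical response rate (0.60) is higher than the current one (0.55), the posterior mean is pulled upward as a0 grows; choosing a0 is exactly the judgment call about how much the past iterations still apply.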