In my last post, I talked about how errors in call records can lead to biased estimates. If these errors are systematic (i.e. interviewers always underreport and never overreport calls -- which seems likely), then adjustments based on call records can add bias to estimates rather than remove it. I pointed to the simulation study that Paul Biemer and colleagues carried out. Their adjustment strategy used the call number as a predictor.

There are other ways to use the data from calls. For instance, if I'm using logistic regression to estimate the probability of response, I can fit a model with a separate parameter for each call number. Under that approach, I'm not making any assumption about the shape of the relationship between calls and response. It's like the Kaplan-Meier estimator in survival analysis. If there is a relationship, then I can fit a logistic regression model with fewer parameters -- maybe as few as one, if I think the relationship is linear on the logit scale. That smooths over some of the observed differences between call numbers and assumes they are just sampling error. Such an approach might mitigate the impact of errors in call records.
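To make the contrast concrete, here is a minimal numpy sketch on simulated call records. Everything about the simulation (the variable names, the number of cases, and the assumed decline in response propensity with call number) is my own illustration, not anything from the study discussed above. The key point it shows: a logistic model with one parameter per call number just reproduces the observed response rate within each call group (the Kaplan-Meier-like, assumption-free fit), while a one-slope logit-linear model smooths those group rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical call records: each case resolved at some call number 1..8,
# with true response propensity declining in call number.
calls = rng.integers(1, 9, size=2000)
true_p = 1 / (1 + np.exp(-(1.0 - 0.3 * calls)))
response = rng.random(2000) < true_p

# Model 1: one parameter per call number. The MLE of a logistic model
# with an indicator for every call number equals the empirical response
# rate within each call group -- no shape assumption at all.
levels = np.unique(calls)
rate_by_call = np.array([response[calls == k].mean() for k in levels])

# Model 2: a single slope on the call number (linear on the logit scale),
# fit by Newton-Raphson / IRLS. This smooths the group-to-group
# differences, treating them as sampling error.
X = np.column_stack([np.ones(len(calls)), calls.astype(float)])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))          # current fitted propensities
    grad = X.T @ (response - p)              # score vector
    H = (X * (p * (1 - p))[:, None]).T @ X   # observed information
    beta += np.linalg.solve(H, grad)         # Newton step

# Smoothed propensity at each call number under the one-slope model.
smoothed = 1 / (1 + np.exp(-(beta[0] + beta[1] * levels)))
```

With clean records the two fits tell a similar story; the difference matters when a call-level error moves a case from one call number to the next, since the saturated model passes that error straight into the group rates while the smoothed model dilutes it.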

We have also sometimes created categories out of the number of calls. This definitely smooths over some of the errors in underreporting, but requires the assumption that cases grouped together are essentially the same. This might seem kind of odd -- it's like saying that 2 calls is the same as 3 calls if I group them together. But given that those two calls might have been at good times, while one or two of the three calls were at bad times, it doesn't seem so odd.

We have also tried taking the natural logarithm of the call numbers. This one makes intuitive sense to me. Under this transformation, the difference between 1 and 2 calls is much bigger than the difference between 12 and 13 calls.
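Both constructions are one-liners as predictor transformations. The cut points below are hypothetical, just to illustrate: binning makes every case within a bin share a response propensity, so a one-call underreport is absorbed as long as it stays inside the bin, while the log transform compresses differences at high call counts exactly as described above.

```python
import numpy as np

calls = np.array([1, 2, 3, 5, 8, 13])

# Categories (hypothetical cut points): 1 call, 2-3 calls, 4+ calls.
# np.digitize with edges [2, 4] assigns 0, 1, or 2 accordingly.
bins = np.digitize(calls, [2, 4])  # -> [0, 1, 1, 2, 2, 2]

# Natural log of the call number: the gap between 1 and 2 calls is
# log(2) - log(1) ~ 0.69, while the gap between 12 and 13 calls is
# log(13) - log(12) ~ 0.08.
log_calls = np.log(calls)
```

Either column (the bin indicator or `log_calls`) can then replace the raw call number in the response-propensity model.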

Of course, I'd prefer nice, clean call records. But there may be some methods that help mitigate the impact of the mess.
