There are call records, and then there are call records...

In my last post, I talked about how errors in call records can cause problems. If these errors are biased in one direction (i.e., interviewers always underreport and never overreport calls -- which seems likely), then adjustments based on call records can create (more) bias in estimates. I pointed to the simulation study that Paul Biemer and colleagues carried out, which examined an adjustment strategy based on the call number.

There are other ways to use the data from calls. For instance, if I'm using logistic regression to estimate the probability of response, I can fit a model with a separate parameter for each call number. Under that approach, I'm not assuming any particular shape for the relationship between calls and response. It's like the Kaplan-Meier estimator in survival analysis. If I'm willing to assume a shape for that relationship, then I can fit a logistic regression model with fewer parameters. Maybe as few as one if I think the relationship is linear. That smooths over some of the observed differences and treats them as sampling error. Such an approach might mitigate the impact of errors in call records.
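To make the difference concrete, here is a minimal sketch in Python with statsmodels (illustrative only, not code from any of our analyses), using a hypothetical data frame with a recorded call count and a 0/1 response indicator for each case:

```python
# A minimal sketch (not code from the actual study), assuming a data frame
# with one row per sample case: 'calls' is the recorded number of call
# attempts and 'response' is a 0/1 indicator of whether the case responded.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
calls = rng.integers(1, 11, size=500)            # hypothetical call counts, 1-10
p = 1 / (1 + np.exp(-(-1.5 + 0.15 * calls)))     # hypothetical response propensities
df = pd.DataFrame({"calls": calls, "response": rng.binomial(1, p)})

# One parameter per call number: no assumed shape for the calls-response relationship.
saturated = smf.logit("response ~ C(calls)", data=df).fit(disp=False)

# A single linear term: smooths over call-to-call differences.
linear = smf.logit("response ~ calls", data=df).fit(disp=False)

print(saturated.params)
print(linear.params)
```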

We have also sometimes created categories out of the number of calls. This definitely smooths over some of the errors in underreporting, but it requires the assumption that cases grouped together are essentially the same. This might seem kind of odd -- it's like saying that 2 calls is the same as 3 calls if I group them together. But given that those two calls might have been at good times, while one or two of the three calls were at bad times, it doesn't seem so odd.
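Here is a sketch of what that grouping might look like, continuing with the same hypothetical data frame; the cut points are placeholders, not the ones we actually used:

```python
# A sketch of the grouping idea, continuing with the hypothetical data
# frame 'df' from the sketch above. The cut points are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

# Cases recorded with 2 or 3 calls land in the same group, so a one-call
# reporting error often won't change the category a case falls into.
df["call_group"] = pd.cut(df["calls"],
                          bins=[0, 1, 3, 6, float("inf")],
                          labels=["1", "2-3", "4-6", "7+"])

grouped = smf.logit("response ~ C(call_group)", data=df).fit(disp=False)
print(grouped.params)
```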

We have also tried taking the natural logarithm of the call numbers. This one makes intuitive sense to me. Under this transformation, the difference between 1 and 2 calls is much bigger than the difference between 12 and 13 calls.
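And a sketch of the log version, again with the hypothetical data frame from above:

```python
# A sketch of the log-calls specification, again using the hypothetical
# data frame 'df' from above. log(2) - log(1) is about 0.69, while
# log(13) - log(12) is only about 0.08, so early calls matter more.
import numpy as np
import statsmodels.formula.api as smf

log_model = smf.logit("response ~ np.log(calls)", data=df).fit(disp=False)
print(log_model.params)
```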

Of course, I'd prefer nice, clean call records. But there may be some methods that help mitigate the impact of the mess.
