
What is a "response propensity"?

We talk a lot about response propensities. I'm starting to think we actually create a lot of confusion for ourselves by the way we sometimes have these discussions. First, there is a distinction between an actual and an estimated propensity. This distinction matters because our models are almost always misspecified. Important predictors are probably never observed -- for example, the mental state of the sampled person at the moment we happen to contact them. As a result, the estimated propensity and the true propensity are different things.
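To make that distinction concrete, here is a minimal sketch in Python with entirely made-up data (the variables `age` and `mood` are my own illustration, not from any real survey). The true propensity depends on an unobserved "mental state," so a model fit only to the observed frame variable can recover it only partially.

```python
# A minimal sketch (synthetic data) of the gap between the true and the
# estimated response propensity when a key predictor is never observed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
age = rng.normal(0, 1, n)   # observed frame variable
mood = rng.normal(0, 1, n)  # unobserved mental state at the moment of contact

# The true propensity depends on both variables.
true_p = 1 / (1 + np.exp(-(-0.5 + 0.8 * age + 1.2 * mood)))
responded = rng.binomial(1, true_p)

# The estimated model is misspecified: `mood` is unavailable to us.
est_p = (LogisticRegression()
         .fit(age.reshape(-1, 1), responded)
         .predict_proba(age.reshape(-1, 1))[:, 1])

# Well below 1: the estimated and true propensities are different things.
print(np.corrcoef(true_p, est_p)[0, 1])
```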

The model selection choices we make can, therefore, have something of an arbitrary flavor to them. I think the choices we make should depend on the purpose of the model. In a recent paper on nonresponse weighting, we examined whether call record information -- especially the number of calls and refusal indicators -- was useful for predicting response propensities for this purpose. It turns out that these variables were strong predictors of response, but they just added noise to the weights since they were unrelated to many of the survey variables. I think this reflects the fact that the survey process is noisy -- lots of variation in recruitment strategies (e.g., the timing of calls varies across cases, interviewers vary), unobserved mental states of sampled persons, and possibly measurement error in the paradata.
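A small simulation can illustrate the weighting point. This is not the analysis from the paper, just a sketch under simple assumptions: `n_calls` strongly predicts response but is unrelated to the survey variable `y`, so adding it to the propensity model inflates the variability of the weights without moving the weighted estimate.

```python
# Synthetic illustration: a strong predictor of response that is unrelated
# to the survey variable adds noise to the nonresponse weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 20_000
age = rng.normal(0, 1, n)                  # related to response AND to y
n_calls = rng.poisson(3, n)                # paradata: related to response only
y = 2.0 + 1.0 * age + rng.normal(0, 1, n)  # survey outcome

logit = -0.3 + 0.6 * age - 0.4 * n_calls
respond = rng.binomial(1, 1 / (1 + np.exp(-logit))).astype(bool)

def weighted_mean(X):
    """Nonresponse-weighted mean of y and the coefficient of variation
    of the weights, using inverse estimated propensities from X."""
    p = LogisticRegression().fit(X, respond).predict_proba(X)[:, 1]
    w = 1 / p[respond]
    return np.average(y[respond], weights=w), w.std() / w.mean()

for label, X in [("age only", age.reshape(-1, 1)),
                 ("age + calls", np.column_stack([age, n_calls]))]:
    est, cv = weighted_mean(X)
    print(f"{label}: weighted mean = {est:.3f}, weight CV = {cv:.3f}")
print(f"full-sample mean = {y.mean():.3f}")
```

Both weighted means land near the full-sample mean, but the weight CV is noticeably larger once `n_calls` is in the model -- noise with no bias reduction to show for it.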

Once we consider the purpose, we might think of model selection very differently. I think this is true for adaptive designs that base design features on these estimated response propensities. Here, it makes sense to identify predictors in these models that are also related to the survey outcome variables. As in the post-survey adjustment example, I think this gives us the best chance to control potential nonresponse biases.
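One hypothetical way to operationalize that idea is a simple screening step: keep a candidate predictor for the propensity model only if it is also associated with at least one key survey outcome. The function below is my own illustration (the name `screen_predictors` and the correlation threshold are assumptions, not a published rule).

```python
# Hypothetical screening of candidate propensity-model predictors: retain a
# variable only if it is also correlated with at least one survey outcome.
import pandas as pd

def screen_predictors(df: pd.DataFrame, candidates: list[str],
                      outcomes: list[str],
                      min_abs_corr: float = 0.1) -> list[str]:
    """Return the candidates whose absolute correlation with at least one
    key survey outcome (computed among respondents) meets the threshold."""
    return [x for x in candidates
            if any(abs(df[x].corr(df[y])) >= min_abs_corr for y in outcomes)]

# e.g., screen_predictors(respondent_df, ["n_calls", "age"], ["y1", "y2"])
```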

Back to the original problem I raised: I think discussion of generic response propensities might lead us astray from this goal. It can be easy to forget that there are important modeling choices, and the way we make those choices will affect our results.
