
Balancing Response

I have been back from the AAPOR conference for a few days. I saw several presentations that had me thinking about the question of balancing response. By "balancing response," I mean actively trying to equalize response rates across subgroups. I can define the subgroups using data that are complete (i.e., on the sampling frame or in paradata available for both respondents and nonrespondents).

I think there probably are situations where balancing response might be a bad thing. For instance, if I'm trying to balance response across two groups, persons 18-44 and 45+, and I have a 20% response rate among 18-44 year olds and a 70% response rate among 45+ persons, I might "balance response" by stopping data collection for 45+ persons once their response rate reaches 20%. It's always easy to lower response rates. It might even be less expensive to do so.

But I think such a strategy avoids the basic problem. How might I optimize the data collection to reduce the risk of nonresponse bias? In my mind, that implies allocating resources differentially. In the example I just gave, I think that would mean reallocating resources from older persons to younger persons. I saw an interesting presentation from the Census Bureau on the National Survey of College Graduates that did something like that.
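To make the trade-off concrete, here is a small sketch of the arithmetic behind the two ways of "equalizing" the rates in that example: cutting off the high-responding group versus reallocating effort toward the low-responding group. The counts are invented for illustration, and this is not the Census Bureau's allocation method.

```python
# Illustrative sketch only: invented counts, not data from any actual survey.

sample = {
    # subgroup: (sampled cases, completed interviews)
    "18-44": (1000, 200),   # 20% response rate
    "45+":   (1000, 700),   # 70% response rate
}

rates = {g: done / n for g, (n, done) in sample.items()}
print(rates)  # {'18-44': 0.2, '45+': 0.7}

# Option 1: "balance down" -- stop working the 45+ group once it hits 20%.
# Equal rates, but effort (and precision) in that group is simply discarded.

# Option 2: "balance up" -- reallocate effort to the 18-44 group.
# Completes needed for the younger group to match the 45+ rate:
n_young, done_young = sample["18-44"]
target_rate = rates["45+"]
extra_needed = target_rate * n_young - done_young
print(extra_needed)  # 500 additional completes among 18-44 year olds
```

The point of the sketch is just that both options produce "balanced" response rates; only the second one spends resources where the risk of nonresponse bias is presumably highest.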

Of course, this opens up new questions... like how do we account for sampling error in this allocation? And why not just adjust for the different response rates after the survey is complete? I'll come back to that later.
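On that second question, the usual post-survey answer is a weighting-class adjustment: inflate the base weights of respondents in each subgroup by the inverse of that subgroup's response rate. A minimal sketch, reusing the invented counts from above:

```python
# Weighting-class nonresponse adjustment: respondents in each subgroup get
# their base weight multiplied by 1 / (subgroup response rate).
# Rates are the same invented figures used above.

base_weight = 1.0  # assume an equal-probability sample for simplicity

rates = {"18-44": 0.20, "45+": 0.70}
adjusted_weight = {g: base_weight / r for g, r in rates.items()}
print(adjusted_weight)  # {'18-44': 5.0, '45+': 1.43...}

# The adjustment removes bias only to the extent that respondents and
# nonrespondents look alike *within* each weighting class, and unequal
# adjustment factors increase the variance of the estimates.
```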


Popular posts from this blog

"Responsive Design" and "Adaptive Design"

My dissertation was entitled "Adaptive Survey Design to Reduce Nonresponse Bias." I had been working for several years on "responsive designs" before that. As I was preparing my dissertation, I really saw "adaptive" design as a subset of responsive design.

Since then, I've seen both terms used in different places. As both terms are relatively new, there is likely to be confusion about the meanings. I thought I might offer my understanding of the terms, for what it's worth.

The term "responsive design" was developed by Groves and Heeringa (2006). They coined the term, so I think their definition is the one that should be used. They defined "responsive design" in the following way:

1. Preidentify a set of design features that affect cost and error tradeoffs.
2. Identify indicators for these costs and errors. Monitor these during data collection.
3. Alter the design features based on pre-identified decision rules based on the indi…

An Experimental Adaptive Contact Strategy

I'm running an experiment on contact methods in a telephone survey. I'm going to present the results of the experiment at the FCSM conference in November. Here's the basic idea.

Multi-level models are fit daily with the household being a grouping factor. The models provide household-specific estimates of the probability of contact for each of four call windows. The predictor variables in this model are the geographic context variables available for an RDD sample.

Let $\mathbf{X}_{ij}$ denote a $k_j \times 1$ vector of demographic variables for the $i^{th}$ person and $j^{th}$ call. The data records are calls. There may be zero, one, or multiple calls to a household in each window. The outcome variable is an indicator for whether contact was achieved on the call. This contact indicator is denoted $R_{ijl}$ for the $i^{th}$ person on the $j^{th}$ call to the $l^{th}$ window. Then for each of the four call windows denoted $l$, a separate model is fit where each household is assum…
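The preview cuts off there, but the structure it describes -- a separate model per call window with a household-level grouping -- can be sketched in simplified form. The code below is not the model from the talk; it is a stand-in that shrinks each household's observed contact rate in a window toward that window's overall rate, which mimics the partial pooling a household random effect would give. The column names (hh_id, window, contact) and the prior strength are assumptions.

```python
import pandas as pd

# Simplified stand-in for a per-window multilevel contact model:
# shrink each household's observed contact rate toward the window mean.
calls = pd.DataFrame({
    "hh_id":   [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "window":  ["eve", "eve", "day", "eve", "day", "day", "day", "eve", "eve"],
    "contact": [0, 1, 0, 1, 1, 0, 0, 0, 1],
})

prior_strength = 4.0  # pseudo-calls; plays the role of the household variance component

window_rate = calls.groupby("window")["contact"].mean()

hh = calls.groupby(["window", "hh_id"])["contact"].agg(["sum", "count"])
hh["p_contact"] = (
    (hh["sum"] + prior_strength * window_rate.reindex(hh.index.get_level_values("window")).values)
    / (hh["count"] + prior_strength)
)
print(hh)  # household-by-window contact probabilities, shrunk toward the window mean
```

The actual model presumably enters the geographic context variables as fixed effects and a household random effect on the logit scale; the shrinkage above only reproduces the pooling behavior, not the covariate adjustment.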

Is there such a thing as "mode"?

Ok. The title is a provocative question. But it's one that I've been thinking about recently. A few years ago, I was working on a lit review for a mixed-mode experiment that we had done. I found that the results were inconsistent on an important aspect of mixed-mode studies -- the sequence of modes.

As I was puzzled about this, I went back and tried to write down more information about the design of each of the experiments that I was reviewing. I started to notice a pattern. Many mixed-mode surveys offered "more" of the first mode. For example, in a web-mail study, there might be 3 mailings with the mail survey and one mailed request for a web survey. This led me to think of "dosage" as an important attribute of mixed-mode surveys.

I'm starting to think there is much more to it than that. The context matters a lot -- the dosage of the mode, what it may require to complete that mode, the survey population, etc. All of these things matter.

Still, we ofte…