My last post was about balancing response. I expressed the view that lowering response rates for subgroups to that of the lowest responding group might not be beneficial. But I left open the question of why we might benefit from balancing on covariates that we have and can use in adjustment.
At AAPOR, Barry Schouten presented some results of an empirical examination of this question. Look here for a paper he has written on the topic. I have some thoughts on the question that are more theoretical, or at least heuristic.
I start from the assumption that we want to improve response rates for low-responding groups. It is true that we can adjust for these response rate differences, but by actually improving response for some groups we can at least partially test that adjustment empirically. Does going from a 40% to a 60% response rate for a subgroup change the estimates for that group? Particularly when that movement in response rates results from a change in design, we can partially verify the assumption that nonresponders and responders are similar within the subgroup.
Of course, in the end, we will have less than 100% response and will need to make some assumptions. But we can test those assumptions in data collection. As a hypothetical, push the adjustment strategy to the extreme. Imagine a survey with four subgroups. We discover that three of them have extremely poor response rates and the fourth is easily interviewed. Should we take one interview in each of the three low-responding subgroups, the remainder from the high-responding fourth group, and then simply adjust the data?
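One way to see the cost of that extreme strategy is to look at what it does to the weights. The numbers below are hypothetical (not from any survey in the post): four equal-sized subgroups, 100 interviews in total, with one interview in each of the three low-responding groups. Kish's approximate design effect from unequal weighting, deff = n · Σw² / (Σw)², then shows how far the effective sample size falls.

```python
# Hypothetical illustration: four equal-sized subgroups, 100 interviews,
# one interview in each of three low-responding groups, 97 in the easy one.
pop_shares = [0.25, 0.25, 0.25, 0.25]
interviews = [1, 1, 1, 97]
n = sum(interviews)

# Post-stratification weight for each respondent in group g:
# (population share of g) / (sample share of g)
weights = []
for share, n_g in zip(pop_shares, interviews):
    weights += [share / (n_g / n)] * n_g

# Kish's approximate design effect due to unequal weights:
# deff = n * sum(w^2) / (sum(w))^2, i.e. 1 + CV^2 of the weights
deff = n * sum(w * w for w in weights) / sum(weights) ** 2
print(f"design effect ~ {deff:.1f}")              # ~18.8
print(f"effective sample size ~ {n / deff:.1f}")  # ~5.3
```

Under these made-up numbers, 100 interviews carry the information of roughly 5 equal-probability interviews, even before we worry about whether the single respondent in each low-responding group resembles the nonrespondents there.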
Which brings me back to the variance question that I keep avoiding....