I've been working on a review of patterns of nonresponse to a large survey that I worked on. My original plan was to look first at things that are related to response, and then at things that are related to the key statistics produced by the survey. "Things" here include design features (e.g. number of calls, refusal conversions) and paradata or sampling frame data (e.g. Census Region, interviewer observations about the sampled unit).
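As a rough sketch of how that screening could be organized (entirely hypothetical, not the actual analysis): for each candidate variable, fit one simple model predicting response on the full sample and another predicting a key statistic among respondents, then bucket the variable by which associations look strong. The data frame, column names, and the p-value cutoff below are all illustrative assumptions.

```python
import pandas as pd
import statsmodels.api as sm

def classify_predictors(df, aux_vars, response_col, key_stat_col, alpha=0.05):
    """Screen each auxiliary variable (assumed numeric or dummy-coded) for
    association with response (full sample) and with a key statistic
    (respondents only), using single-predictor models and a crude
    p-value cutoff."""
    respondents = df[df[response_col] == 1]
    rows = []
    for var in aux_vars:
        # Logistic regression: does the variable predict response (0/1)?
        X = sm.add_constant(df[[var]])
        p_resp = sm.Logit(df[response_col], X).fit(disp=0).pvalues[var]
        # OLS among respondents: does it predict the key statistic?
        Xr = sm.add_constant(respondents[[var]])
        p_key = sm.OLS(respondents[key_stat_col], Xr).fit().pvalues[var]
        rows.append({"variable": var,
                     "predicts_response": p_resp < alpha,
                     "predicts_key_stat": p_key < alpha})
    # The four cells of the resulting table are exactly the cases
    # discussed below: response only, key statistic only, neither, both.
    return pd.DataFrame(rows)
```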
We found that some things heavily influenced response (e.g. calls) but did not influence the key statistics. That's good: more or less of such a feature matters for sampling error, but it doesn't seem to matter for nonresponse bias.
There were also some that influenced the key statistics but not response, for example, the interviewer observations we have for the study. Response rates are close across the subgroups these observations define, so I won't have to rely on large weights to get unbiased estimates. Put another way, I empirically tested what the estimates would have looked like had I relied on that adjustment assumption at an earlier phase of the survey process.
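Here's a rough sketch of what that empirical check amounts to, with made-up numbers: compare the unadjusted respondent mean against a weighting-class-adjusted mean, where the classes come from the interviewer observations. When response rates are close across classes, the weights are nearly flat and the two estimates nearly coincide.

```python
import numpy as np

# Hypothetical respondents: key statistic y, classes defined by an
# interviewer observation, and class-level response rates that are close.
y = np.array([10.0, 12.0, 11.0, 30.0, 28.0])   # key statistic values
cls = np.array(["A", "A", "A", "B", "B"])      # observation-defined class
resp_rate = {"A": 0.60, "B": 0.58}             # close across classes

# Unadjusted estimate: plain respondent mean.
unadjusted = y.mean()

# Adjusted estimate: weight each respondent by 1 / class response rate.
w = np.array([1 / resp_rate[c] for c in cls])
adjusted = (w * y).sum() / w.sum()

print(unadjusted, adjusted)  # ~18.2 vs ~18.3: nearly identical
```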
And of course, there were some that predicted neither. Notably, there were none that strongly predicted both response and the key statistics.
This result seems good to me. Why? We haven't allowed any variable to become highly predictive of response. If we had, we would need to rely on strong assumptions (i.e. large nonresponse adjustments) to justify unbiased estimates. But we can also predict some of the key statistics. That relationship might be confounded, but it still seems good that we have some useful predictors of the key statistics.
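To put a number on the "large nonresponse adjustments" worry (rates below are hypothetical): under a weighting-class adjustment, each class's respondents get a weight factor of one over the class response rate, so a variable that strongly separates response rates forces weights that vary several-fold, while a weak predictor yields nearly uniform weights.

```python
# Weighting-class adjustment factors are 1 / (class response rate).
# Hypothetical rates chosen to contrast the two situations.
strong = {"class_A": 0.90, "class_B": 0.30}  # strongly predicts response
weak = {"class_A": 0.62, "class_B": 0.58}    # barely predicts response

for name, rates in [("strong predictor", strong), ("weak predictor", weak)]:
    factors = {c: round(1 / r, 2) for c, r in rates.items()}
    print(name, factors)

# strong predictor {'class_A': 1.11, 'class_B': 3.33}  -> weights vary 3x
# weak predictor {'class_A': 1.61, 'class_B': 1.72}    -> nearly flat
```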
In any event, organizing the analysis along these lines was helpful. I didn't develop a single-number characterization of the quality of our data, but I did tell a reasonably coherent story that, I believe, provides convincing evidence that our process produces good-quality data.