My feeling is that this is a big question facing our field. In my view, we need both of these to be successful.
The argument runs something like this. If you are going to use those variables (frame data and paradata) for your nonresponse adjustments, then why bother using them to alter your data collection? Wouldn't it be cheaper to just use them in your adjustment strategy?
There are several arguments that can be used when facing these kinds of questions. The main point I want to make here is that I believe this is an empirical question. Let's call X my frame variable and Y the survey outcome variable. If I assume that the relationship between X and Y is the same no matter what the response rate is for categories of X, then, sure, it might be cheaper to adjust. But that doesn't seem to be true very often. And that is an empirical question.
There are two ways to examine this question. [Well, whenever someone says definitively there are "two ways of doing something," in my head, I'm thinking "at least two ways."] First, use existing data and simulate adjusted estimates at different response rates. Second, run an experiment. Compare the two methods. I think we actually need both of these things. It is an important question. We might as well be thorough in our research aimed at understanding it.
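The first approach, simulating adjusted estimates, can be sketched with a toy example. This is a hypothetical illustration, not anyone's actual study: all the numbers are made up. The idea is to generate a population where the outcome Y depends on a frame variable X, simulate response under two mechanisms, and post-stratify the respondent means to the known frame distribution of X. When response depends only on X, the adjustment works; when response also depends on Y within categories of X, the relationship between X and Y among respondents differs from that in the population, and the same adjustment leaves bias behind.

```python
import random

random.seed(42)

# Toy population: frame variable X (two groups), outcome Y depends on X.
N = 100_000
pop = []
for _ in range(N):
    x = random.random() < 0.4                    # ~40% in group True
    y = 10 + (5 if x else 0) + random.gauss(0, 2)
    pop.append((x, y))

true_mean = sum(y for _, y in pop) / N

def adjusted_mean(resp_prob):
    """Simulate response, then post-stratify respondent means
    to the known frame distribution of X (a simple weighting
    adjustment). resp_prob(x, y) is the response propensity."""
    resp = [(x, y) for x, y in pop if random.random() < resp_prob(x, y)]
    means = {}
    for g in (True, False):
        ys = [y for x, y in resp if x == g]
        means[g] = sum(ys) / len(ys)
    share = sum(1 for x, _ in pop if x) / N       # frame share of group True
    return share * means[True] + (1 - share) * means[False]

# Scenario A: response depends only on X -> adjustment removes the bias.
est_a = adjusted_mean(lambda x, y: 0.6 if x else 0.3)

# Scenario B: response also depends on Y within categories of X ->
# bias remains despite the adjustment.
est_b = adjusted_mean(lambda x, y: 0.6 if y > true_mean else 0.2)

print(f"true mean:  {true_mean:.2f}")
print(f"scenario A: {est_a:.2f}")
print(f"scenario B: {est_b:.2f}")
```

Repeating this kind of simulation at different overall response rates, and with the nonresponse mechanism estimated from real data rather than assumed, is one way to put numbers on whether the X-Y relationship really is stable across response rates.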
Shouldn't the question be: does adjusting data collection bring any improvements over just adjusting with weights? After all, using a weighting adjustment alone is almost always cheaper than adjusting data collection, isn't it?
I still don't understand your point about this being an empirical question. If the relationship between X and Y is the same no matter what the response rate is for categories of X, then adjusting the data collection won't bring any benefits beyond what weighting adjustments offer. Otherwise, adjusting data collection might bring additional advantages. I understand that whether the assumption holds is an empirical question, but in my view this is a question that the survey designer/analyst should answer for his/her own survey and variables. So, what I mean is: as survey methodologists, we should show in which situations adjusting the data collection, or just using weights, is more beneficial. Then practitioners should check which of those conditions their own surveys fit.
I agree that we would almost always want to adjust. On the other hand, I would say that adjusting data collection might actually save costs. Groves and Heeringa had examples like that. And their title mentions controlling both costs and errors.
Yes, the assumption has to be verified. My guess is that, most of the time, we could benefit from adjusting data collection.