The goal of a stopping rule for surveys is to govern data collection with a quantity that is related to the nonresponse bias (under an assumed, but reasonable, model).
Although we don't discuss it much in the article, I like to speculate about the effect this might have on data collection practice. If the response rate is the key metric, then data collection should focus on interviewing the easiest-to-interview cases that get you to the target response rate. Of course, it's a bit more 'random' than that in practice. But that is the underlying logic.
What would the logic be under a different key metric (or stopping rule)? Would data collection organizations need to learn which cases get them to their target most efficiently? How would those cases differ? It seems that in this situation there would be a bigger reward for exploring the entire covariate space on the sampling frame. How do you go about doing that?
There is a set of alternative indicators out there. Groves and colleagues attempted to delineate them in a recent article in Survey Practice. Perhaps one key question to ask of each indicator is how it would affect data collection.
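To make the contrast concrete, here is a minimal sketch (with entirely made-up, simulated data) of how one alternative indicator of this kind can diverge from the response rate. I'm using an R-indicator-style balance measure (in the spirit of Schouten, Cobben, and Bethlehem) computed from response propensities estimated within cells of a frame covariate; the covariate, cell propensities, and sample here are all hypothetical, purely for illustration.

```python
# Hypothetical sketch: response rate vs. an R-indicator-style balance measure.
# The frame, covariate, and propensities below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(42)

# Simulated sampling frame: one categorical covariate (say, region) with 4 cells.
n = 10_000
cell = rng.integers(0, 4, size=n)

# Simulated data collection that chases the easy cases: the easy cells
# respond far more often than the hard ones.
true_propensity = np.array([0.8, 0.6, 0.3, 0.1])
responded = rng.random(n) < true_propensity[cell]

response_rate = responded.mean()

# Estimate response propensities within covariate cells. Because the
# covariate lives on the frame, it is known for respondents and
# nonrespondents alike -- this is the sense in which the whole covariate
# space can be explored.
est_propensity = np.array(
    [responded[cell == c].mean() for c in range(4)]
)[cell]

# R-indicator: R = 1 - 2 * SD(estimated propensities).
# R = 1 means a perfectly balanced response; lower values signal that the
# respondent pool is drifting away from the frame.
r_indicator = 1 - 2 * est_propensity.std()

print(f"response rate: {response_rate:.3f}")
print(f"R-indicator:   {r_indicator:.3f}")
```

Under this kind of rule, piling more interviews into the already-easy cells raises the response rate but widens the spread of propensities, which pushes the indicator down. That is the reversal of incentives I have in mind: the reward shifts to working the hard cells.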