In my last post, I argued that we need a multi-faceted approach to examining the possibility of nonresponse bias: multiple models, different analytic approaches, and so on.
But any optimization problem requires that an objective function be defined: a single quantity to be minimized or maximized. We might argue that the current process treats the response rate as the objective function, with every decision made with the goal of maximizing it. It's probably the case that most survey data collections aren't fully 'optimized' in this regard, but many may be close to optimal.
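To make that implicit problem concrete, here is one way it might be written down. This is only a sketch: the response model $r_i(a)$ and the cost function $c(a)$ are assumptions for illustration, not anything the field has settled on.

$$\max_{a}\; \mathrm{RR}(a) = \frac{1}{n}\sum_{i=1}^{n} r_i(a) \quad \text{subject to} \quad c(a) \le B,$$

where $a$ is the allocation of effort across the $n$ sampled cases, $r_i(a)$ is the probability that case $i$ responds under that allocation, and $B$ is the data collection budget. Swapping in a different indicator means replacing $\mathrm{RR}(a)$ with some other function of the design and the respondents.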
If we want to optimize differently, then we still need some kind of indicator to maximize (or minimize, depending on the indicator). A recent article in Survey Practice used simulation to try several different indicators in this role. Before placing a new indicator in this role, I think we need at least two things:
1) Experimental research to determine the impact of tuning data collection to a different indicator. What are the consequences, especially with regard to nonresponse bias?
2) What constraints on the problem are needed? For example, do we need to set a minimum response rate constraint? Do we need to monitor multiple indicators in order to ensure robust protection against nonresponse bias? (See the sketch below for what such a constrained problem might look like.)
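Here is a minimal sketch of what optimizing to a different indicator under a minimum response rate constraint could look like. Everything in it is a hypothetical illustration: the response propensity model, the cost structure, and the parameter values are all made up for the example. The objective is the R-indicator of Schouten and colleagues (one minus twice the standard deviation of the response propensities); the constraint machinery is scipy's SLSQP solver.

```python
# Hypothetical sketch: maximize an R-indicator subject to a budget and a
# minimum response rate. The propensity model and all numbers are invented
# for illustration, not anyone's published design.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 60                                   # sampled cases
base = rng.uniform(0.05, 0.40, size=n)   # baseline response propensities
gain = rng.uniform(0.02, 0.10, size=n)   # assumed propensity gain per unit effort

def propensities(effort):
    # Assumed response model: more effort on a case raises its propensity.
    return np.clip(base + gain * effort, 0.0, 1.0)

def response_rate(effort):
    return propensities(effort).mean()

def r_indicator(effort):
    # R-indicator: 1 - 2 * SD of response propensities (Schouten et al.).
    return 1.0 - 2.0 * propensities(effort).std()

budget = 200.0   # total effort available
min_rr = 0.30    # response rate floor (the constraint in question 2)

result = minimize(
    fun=lambda e: -r_indicator(e),       # maximize representativeness
    x0=np.full(n, budget / n),           # start from equal effort per case
    method="SLSQP",
    bounds=[(0.0, 10.0)] * n,
    constraints=[
        {"type": "ineq", "fun": lambda e: budget - e.sum()},           # cost cap
        {"type": "ineq", "fun": lambda e: response_rate(e) - min_rr},  # RR floor
    ],
)

print(f"R-indicator: {r_indicator(result.x):.3f}, "
      f"response rate: {response_rate(result.x):.3f}")
```

The point of the sketch is structural: the response rate doesn't disappear from the problem, it just moves from the objective into the constraints, which is one possible answer to question 2 above.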
It's hard to predict what impact such a change might have on how surveys are conducted. It sounds like a daunting task, but an important one.