I heard Andy Peytchev speak about responsive design recently. He raised some really good points. One of these was an observation in the "total survey error" vein: different surveys have different objectives, and those objectives may be ranked differently. One survey may prioritize sampling error while another treats nonresponse bias as its biggest concern. Since there are always tradeoffs between error sources, the priorities indicate how those tradeoff decisions were, or will be, made.
Since responsive design has largely been thought of as a remedy for nonresponse bias, this idea seems novel. Of course, it is worth recalling that Groves and Heeringa did originally propose the idea from a total survey error perspective. On the other hand, many of their examples were related to nonresponse.
I think it is important to 1) think about these tradeoffs in errors and costs, 2) explicitly state what they are for any given survey, and 3) formalize the tradeoffs. I'm not sure we usually even get to step one, let alone steps two and three.
By our training, survey methodologists ought to be able to do at least some of steps one and two. Operations Research and other fields such as Computer Science might be helpful for accomplishing step three. Melania Calinescu, in her dissertation, used formal optimization methods to work with explicitly stated objectives for nonresponse and measurement error. In some examples, she used existing data to identify designs that maximized the R-Indicator while constraining expected measurement error to specified limits.
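To give a flavor of what that kind of formalization might look like, here is a minimal sketch. It is not Calinescu's actual formulation, and every number in it is invented: it assumes we have, from prior data, estimated response propensities, expected measurement biases, and per-unit costs for each combination of population stratum and design option (say, two modes). The R-Indicator is R(ρ) = 1 − 2S(ρ), where S(ρ) is the standard deviation of the response propensities, so maximizing R is equivalent to minimizing the weighted variance of the propensities, which is what the solver below does.

```python
# A minimal, hypothetical sketch (not Calinescu's actual formulation):
# choose, for each population stratum, a mix of design options (e.g., modes)
# that maximizes the R-Indicator while keeping expected measurement bias and
# expected per-unit cost below explicitly stated limits. All inputs invented.
import numpy as np
from scipy.optimize import minimize

H, D = 3, 2                      # 3 strata, 2 design options
w = np.array([0.5, 0.3, 0.2])    # population shares of the strata

rho = np.array([[0.3, 0.6],      # response propensity, stratum x option
                [0.5, 0.7],
                [0.4, 0.8]])
bias = np.array([[0.02, 0.05],   # expected measurement bias, stratum x option
                 [0.01, 0.04],
                 [0.03, 0.06]])
cost = np.array([[10., 40.],     # per-unit cost, stratum x option
                 [10., 45.],
                 [10., 50.]])
BIAS_MAX, COST_MAX = 0.04, 30.0  # explicitly stated limits

def unpack(z):
    """z holds the allocation x[h, d]: share of stratum h given option d."""
    return z.reshape(H, D)

def prop_var(z):
    """Weighted variance of stratum propensities; minimizing it maximizes R."""
    x = unpack(z)
    rho_h = (x * rho).sum(axis=1)          # stratum-level propensities
    rho_bar = (w * rho_h).sum()            # overall response propensity
    return (w * (rho_h - rho_bar) ** 2).sum()

def r_indicator(z):
    return 1.0 - 2.0 * np.sqrt(prop_var(z))

def exp_bias(z):
    return (w * (unpack(z) * bias).sum(axis=1)).sum()

def exp_cost(z):
    return (w * (unpack(z) * cost).sum(axis=1)).sum()

cons = ([{"type": "eq", "fun": lambda z, h=h: unpack(z)[h].sum() - 1.0}
         for h in range(H)] +                    # each stratum's mix sums to 1
        [{"type": "ineq", "fun": lambda z: BIAS_MAX - exp_bias(z)},
         {"type": "ineq", "fun": lambda z: COST_MAX - exp_cost(z)}])

z0 = np.full(H * D, 1.0 / D)     # start from an even split
res = minimize(prop_var, z0, method="SLSQP",
               bounds=[(0.0, 1.0)] * (H * D), constraints=cons)

print("allocation:\n", unpack(res.x).round(3))
print("R-Indicator:", round(r_indicator(res.x), 3))
print("bias:", round(exp_bias(res.x), 4), " cost:", round(exp_cost(res.x), 2))
```

Minimizing the propensity variance rather than maximizing R directly sidesteps the square root's non-differentiability when the propensities equalize. A real problem would also carry constraints on response rates, precision, and the like, which is exactly why stating the objectives explicitly matters.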
These are tough problems to formalize, but doing so would, I think, be a real contribution to actually implementing the total survey error perspective... for responsive or any other kind of survey designs.