I heard Andy Peytchev speak about responsive design recently. He raised some really good points. One of these was a "total survey error" kind of observation. He pointed out that different surveys have different objectives and that these may be ranked differently. One survey may prioritize sampling error while another has nonresponse bias as its biggest priority. As there are always tradeoffs between error sources, the priorities indicate which way those decisions were or will be made. Since responsive design has largely been thought of as a remedy for nonresponse bias, this idea seems novel. Of course, it is worth recalling that Groves and Heeringa did originally propose the idea in a total survey error perspective. On the other hand, many of their examples were related to nonresponse. I think it is important to 1) think about these tradeoffs in errors and costs, 2) explicitly state what they are for any given survey, and 3) formalize the tradeoffs. I'm not sure tha...
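To make point 3 concrete, here is a minimal sketch of what "formalizing the tradeoffs" might look like: score candidate designs by a priority-weighted sum of error components, subject to a cost constraint. All of the component names, weights, and numbers below are hypothetical illustrations, not estimates from any real survey.

```python
# Hypothetical designs: estimated error components (arbitrary units) and cost.
designs = {
    "A": {"sampling_error": 2.0, "nonresponse_bias": 1.0, "cost": 100},
    "B": {"sampling_error": 1.0, "nonresponse_bias": 2.5, "cost": 100},
}

# Survey-specific priorities: the weights state, explicitly, which error
# source this survey cares about most (here, nonresponse bias).
weights = {"sampling_error": 1.0, "nonresponse_bias": 2.0}

def weighted_error(d):
    # Priority-weighted total of the error components.
    return sum(weights[k] * d[k] for k in weights)

# Pick the feasible design (within budget) with the lowest weighted error.
budget = 120
feasible = {name: d for name, d in designs.items() if d["cost"] <= budget}
best = min(feasible, key=lambda name: weighted_error(feasible[name]))
print(best)  # prints "A": its weighted error (4.0) beats B's (6.0)
```

The point of even a toy formalization like this is that it forces the priorities to be written down: change the weights and a different design wins, which is exactly the tradeoff the post is arguing we should make explicit.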