One aspect of responsive design that hasn't received much attention is when to stop collecting data. Groves and Heeringa (2006) argue that you should change your data collection strategy when it ceases to bring in interviews that change your estimate. But when should you stop altogether? The answer seems like it should be driven by some estimate of the risk of nonresponse bias. Given that the response rate appears to be a poor proxy for this risk, though, what should we do? Rao, Glickman, and Glynn proposed a stopping rule for binary survey outcome variables. Now Raghu and I have an article accepted at Statistics in Medicine that proposes a rule for normally distributed data: using imputation methods, stop data collection when the probability that additional data (i.e. more interviews) would change your estimate is sufficiently small. The rule is also discussed in my dissertation.
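To give a feel for the idea, here is a minimal sketch of an imputation-based stopping rule of this general flavor. It assumes a simple normal model with noninformative priors and treats the not-yet-interviewed cases as missing outcomes to be multiply imputed; the function name, the thresholds `delta` and the implied epsilon, and the modeling details are my illustrative choices, not the specifics of the published rule.

```python
import numpy as np

def prob_estimate_changes(y_obs, n_remaining, delta, n_draws=5000, seed=0):
    """Monte Carlo sketch of an imputation-based stopping rule.

    Estimates the probability that completing the remaining
    `n_remaining` interviews would move the mean estimate by more
    than `delta`. Imputations come from the posterior predictive of
    a normal model fit to the respondents (noninformative prior);
    this is an illustration, not the paper's exact procedure.
    """
    rng = np.random.default_rng(seed)
    y_obs = np.asarray(y_obs, dtype=float)
    n = y_obs.size
    ybar, s2 = y_obs.mean(), y_obs.var(ddof=1)

    changes = np.empty(n_draws)
    for d in range(n_draws):
        # Posterior draw of (sigma^2, mu) under a noninformative prior
        sigma2 = s2 * (n - 1) / rng.chisquare(n - 1)
        mu = rng.normal(ybar, np.sqrt(sigma2 / n))
        # Impute outcomes for the not-yet-interviewed cases
        y_mis = rng.normal(mu, np.sqrt(sigma2), size=n_remaining)
        # How far would the completed-sample mean move from the current one?
        full_mean = (y_obs.sum() + y_mis.sum()) / (n + n_remaining)
        changes[d] = abs(full_mean - ybar)

    return (changes > delta).mean()

# Stop once this probability falls below a chosen small epsilon
rng = np.random.default_rng(1)
p = prob_estimate_changes(rng.normal(10, 2, size=400), n_remaining=100, delta=0.5)
```

With 400 respondents already in hand and only 100 interviews outstanding, the completed-sample mean can barely move, so `p` comes out near zero and the rule would say stop.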