In my last post, I talked about treating the data collected between attempts or waves as "feedback" from sampled units. I suggested that the protocol might be tailored to this feedback.
Another way to express this is to say that we want to increase everyone's probability of response by tailoring to their feedback. Of course, we might also make the problem more complex by "tailoring" the tailoring. That is, we may want to raise the response probabilities of some individuals more than those of others. If so, we might consider a technique that is more likely to succeed with that subset. I'm thinking of this as a decision problem.
For example, assume that tailoring can increase response probabilities, and that we have two different techniques available:
1) The first technique increases everyone's response probability by 0.1.
2) The second increases the response probability of a particular subgroup (say, half the population) by 0.15 and leaves everyone else unchanged.
We might prefer the latter if it reduces some other indicator of the risk of nonresponse bias more than the former does, even though on the response rate alone we would definitely prefer the former.
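To make the trade-off concrete, here is a minimal numeric sketch in Python. The population size, the baseline propensities, and the choice of the R-indicator (R = 1 - 2 × SD of the response propensities) as the bias-risk measure are all illustrative assumptions, not part of the example above.

```python
import numpy as np

# Illustrative population of 1,000 cases; the subgroup (first half) starts
# with lower baseline response propensities than everyone else.
rng = np.random.default_rng(0)
n = 1000
base = np.concatenate([rng.uniform(0.2, 0.4, n // 2),   # low-propensity subgroup
                       rng.uniform(0.5, 0.7, n // 2)])  # everyone else

# Technique 1: +0.10 for every case.
tech1 = base + 0.10

# Technique 2: +0.15 for the low-propensity subgroup, +0 for everyone else.
tech2 = base.copy()
tech2[: n // 2] += 0.15

for name, p in [("baseline", base), ("technique 1", tech1), ("technique 2", tech2)]:
    # The expected response rate is the mean propensity. The R-indicator,
    # R = 1 - 2 * SD(propensities), rewards *even* propensities; higher
    # values suggest a lower risk of nonresponse bias.
    print(f"{name:<12} response rate = {p.mean():.3f}, "
          f"R-indicator = {1 - 2 * p.std():.3f}")
```

Under these made-up numbers, the first technique yields the higher response rate, but the second yields more even propensities (a higher R-indicator), which is exactly the tension described above.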
Or, we might have two techniques where one has high variance in its estimated impact for the subgroup but low variance overall, while the other has low variance for the subgroup and high variance for everyone else. We might prefer the latter technique if something other than the response rate is our reward function.
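A similar sketch for this second comparison, again with hypothetical numbers: both techniques below have the same expected impact (+0.1 everywhere) but differ in where the estimation uncertainty sits. If the reward function is, say, the probability that an underrepresented subgroup actually reaches a target propensity, the technique that is precise for the subgroup can win even though the expected response rates are identical. The impact distributions, baselines, and target are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws = 100_000

# Hypothetical (mean, sd) of each technique's estimated impact on propensity.
# Technique A: noisy for the subgroup, precise for everyone else;
# technique B: precise for the subgroup, noisy for everyone else.
impacts = {
    "A": {"subgroup": (0.10, 0.08), "others": (0.10, 0.01)},
    "B": {"subgroup": (0.10, 0.01), "others": (0.10, 0.08)},
}
base = {"subgroup": 0.30, "others": 0.60}  # assumed baseline propensities
target = 0.38  # assumed target propensity for the subgroup

for name, eff in impacts.items():
    # Draw possible realized propensities, clipped to stay in [0, 1].
    sub = np.clip(base["subgroup"] + rng.normal(*eff["subgroup"], n_draws), 0, 1)
    oth = np.clip(base["others"] + rng.normal(*eff["others"], n_draws), 0, 1)
    # Both techniques have the same expected response rate (equal-sized groups)...
    rate = (sub + oth).mean() / 2
    # ...but differ on a reward that penalizes uncertainty where it matters:
    # the probability that the subgroup actually reaches the target propensity.
    hit = (sub >= target).mean()
    print(f"technique {name}: E[response rate] = {rate:.3f}, "
          f"P(subgroup >= {target}) = {hit:.3f}")
```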