I read this very interesting blog post by Peter Lugtig yesterday. The slides from the talk he describes are also linked from the post. He builds on an analysis that identifies several distinct classes of nonresponders and then describes the characteristics of persons in each class. For example, some drop out early, some "lurk" around the survey, and some stay more or less permanently.
He suggests that it might be smart to identify design features that are effective for each of the groups and then tailor these features to the subgroups in an adaptive design. This makes a lot of sense. And panel studies are an attractive place to start doing this kind of work. In the panel setting, there is a lot more data available on cases. This can help in identifying subgroups. And, with repeated trials of the protocol, it may be possible to improve outcomes (response) over time.
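Just to make the idea concrete, here is a toy sketch of what identifying such subgroups from panel response histories might look like. The data, the number of classes, and the clustering method are my own assumptions for illustration, not anything taken from the post or the underlying analysis (which presumably used a different, more careful approach).

```python
# Toy sketch: recover subgroups of panel members from wave-by-wave response
# histories. All data here is simulated; names and patterns are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical wave-by-wave response probabilities for three latent patterns
# over waves 1..6: stayers, early dropouts, and intermittent "lurkers".
patterns = np.array([
    [0.9, 0.9, 0.9, 0.9, 0.9, 0.9],   # stayers
    [0.9, 0.8, 0.4, 0.2, 0.1, 0.1],   # early dropouts
    [0.6, 0.4, 0.6, 0.3, 0.6, 0.4],   # lurkers
])
membership = rng.integers(0, 3, size=500)
response_history = (rng.random((500, 6)) < patterns[membership]).astype(int)

# Cluster members on their response histories to recover the subgroups.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
groups = kmeans.fit_predict(response_history)

# Average response pattern per recovered group.
for g in range(3):
    print(g, response_history[groups == g].mean(axis=0).round(2))
```

With richer panel data (paradata, prior-wave answers, mode history), the same idea extends to more informative groupings than this response-indicator example.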
I think the hard part is creating the groups. This reminds me of a problem I read about years ago involving policies for catalog mailings. The groupings need to be homogeneous with respect to the impact of the feature we intend to use. For example, if we decide to mail an off-wave newsletter, we want that feature to improve the probability of response for everyone in the group and for no one outside the group, or as nearly so as possible.
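Continuing the catalog-mailing analogy, here is a rough sketch of the kind of check I have in mind, assuming we had an earlier wave in which the off-wave newsletter was randomly assigned within the panel. Everything below (variable names, data, effect sizes) is hypothetical.

```python
# Rough sketch: is a grouping homogeneous with respect to the effect of a
# feature (an off-wave newsletter), given a wave where it was randomly assigned?
# All names and data are made up for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000
trial = pd.DataFrame({
    "group": rng.integers(0, 3, size=n),        # subgroup label from the classification
    "newsletter": rng.integers(0, 2, size=n),   # 1 = received the off-wave newsletter
})

# Simulated response: the newsletter helps group 0 a lot, group 1 a little,
# and group 2 not at all.
lift = np.array([0.15, 0.05, 0.0])[trial["group"]]
p = 0.4 + lift * trial["newsletter"]
trial["responded"] = rng.random(n) < p

# Estimated effect of the newsletter within each group: if the grouping is doing
# its job, the effect is clearly positive in the target group(s) and near zero elsewhere.
effect = (
    trial.groupby(["group", "newsletter"])["responded"].mean()
         .unstack("newsletter")
         .pipe(lambda t: t[1] - t[0])
)
print(effect.round(3))
```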
There is an additional complication: we also want the cases we recruit to improve the quality of our estimates. Of course, they reduce sampling error. It would be nice if they also reduced the bias of adjusted estimates, but it's a bit harder to judge when that happens.
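One way to get a feel for that question, at least in a simulation where the truth is known, is to compare an adjusted estimate computed with and without the newly recruited cases. A toy sketch, using made-up data and a simple weighting-class adjustment (none of this comes from the post):

```python
# Toy sketch: does recruiting a particular subgroup reduce the bias of an
# adjusted estimate? Simulated population with a known mean; the adjustment is
# a simple weighting-class (poststratification) estimate. All numbers are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
N = 100_000
pop = pd.DataFrame({"cell": rng.integers(0, 4, size=N)})
pop["y"] = 10 + 2 * pop["cell"] + rng.normal(0, 3, size=N)
true_mean = pop["y"].mean()

def adjusted_mean(sample: pd.DataFrame) -> float:
    """Weighting-class estimate: cell means weighted by population cell shares."""
    cell_means = sample.groupby("cell")["y"].mean()
    cell_shares = pop["cell"].value_counts(normalize=True).sort_index()
    return float((cell_means * cell_shares).sum())

# Baseline respondents: response depends on the cell and, within cells, on y itself,
# so the weighting-class adjustment cannot remove all of the bias.
respond = rng.random(N) < (0.6 - 0.1 * pop["cell"] - 0.01 * (pop["y"] - 10))
base = pop[respond]

# Extra recruits from a targeted effort aimed at the hardest cell (cell 3).
recruits = pop[(pop["cell"] == 3) & ~respond].sample(500, random_state=2)

print("bias without recruits:", round(adjusted_mean(base) - true_mean, 3))
print("bias with recruits:   ", round(adjusted_mean(pd.concat([base, recruits])) - true_mean, 3))
```

In practice we don't know the true mean, of course, so judging whether the recruits reduce bias is much less direct than this comparison suggests.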