

Showing posts from July, 2014

Responsive Design and Uncertainty

To my mind, a key reason for responsive designs is uncertainty. This uncertainty can occur in at least two ways. First, at the survey level, I can be uncertain about what response rate a certain protocol will elicit. If I don't obtain the expected response rate after applying the initial protocol, then I can change the protocol and try a different one. Second, I can be uncertain about which protocol to apply at the case level, but I know what the protocol will be after I have observed a few initial trials of some starting protocol. For example, I might call a case three times on the telephone with no contact before I conclude that I should attempt the case face-to-face. In either situation, I'm not certain about which protocol specific cases will get, but I do have a pre-specified plan that will guide my decisions during data collection. There is a difference, though, in that in the latter situation (case level), I can predict that a proportion of cases will receive the second protocol…
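To make the case-level rule concrete, here is a minimal sketch in Python of a pre-specified switching plan like the one above; the case fields and the three-call threshold are illustrative, not taken from any particular survey system.

```python
# Minimal sketch of the case-level rule described above: switch a case to
# face-to-face after three telephone attempts with no contact.
# Field names and the threshold are illustrative only.

from dataclasses import dataclass

MAX_PHONE_NO_CONTACT = 3  # pre-specified switching rule


@dataclass
class Case:
    case_id: str
    phone_attempts: int = 0
    contacted: bool = False
    protocol: str = "telephone"


def update_protocol(case: Case) -> Case:
    """Apply the pre-specified plan: keep calling until the threshold is hit,
    then move the case to the face-to-face protocol."""
    if (not case.contacted
            and case.protocol == "telephone"
            and case.phone_attempts >= MAX_PHONE_NO_CONTACT):
        case.protocol = "face-to-face"
    return case


# Example: a case with three unanswered calls gets reassigned.
c = update_protocol(Case(case_id="A-001", phone_attempts=3))
print(c.protocol)  # face-to-face
```

The point of the sketch is that the decision for any single case is uncertain in advance, but the plan that governs the decision is written down before data collection begins.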

Classification Problems with Daily Estimates of Propensity Models

A few years ago, I ran several experiments with a new call-scheduling algorithm. You can read about it here. I had to classify cases based upon which call window would be the best one for contacting them. I had four call windows. I ranked them in order, for each sampled number, from best to worst probability of contact. The model was estimated using data from prior waves of the survey (cross-sectional samples) and the current data. For a paper that will be coming out soon, I looked at how often these classifications changed when the final data were used compared to the interim data. The following table shows the difference between the two rankings:

Change in Ranking    Percent
0                    84.5
1                    14.1
2                     1.4
3                     0.1

It looks like the rankings didn't change much: 85% were the same, and 14% changed by one rank. What is difficult to know is what difference these classification errors might make in the o…
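For readers who want to see the mechanics, here is a rough Python sketch of the ranking comparison; the propensities are simulated, so the percentages it prints will not match the table above, but the ranking and tabulation steps are the ones described.

```python
# Sketch of the ranking comparison described above: rank four call windows
# from best to worst estimated contact probability for each case, once with
# interim model estimates and once with final estimates, then tabulate how
# much the ranks change. All propensities here are simulated.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_cases, n_windows = 1000, 4

interim = rng.uniform(size=(n_cases, n_windows))                      # interim estimates
final = np.clip(interim + rng.normal(0, 0.05, interim.shape), 0, 1)   # final estimates

# Rank windows within each case (0 = best contact probability).
interim_rank = (-interim).argsort(axis=1).argsort(axis=1)
final_rank = (-final).argsort(axis=1).argsort(axis=1)

# Absolute change in rank for every case-window pair.
change = np.abs(interim_rank - final_rank).ravel()
print(pd.Series(change).value_counts(normalize=True).sort_index() * 100)
```

The same tabulation could be run on the case's top-ranked window only, which is the classification that actually drives the scheduler.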

Responsive Design is not just Two-Phase Sampling

I recently gave, along with Brady West, a short course on paradata and responsive design. We had a series of slides on what "responsive design" is. I had a slide with a title similar to that of this post; I think it was "Responsive Design is not equal to Two-Phase Sampling." I sometimes have discussions with people about using "responsive design" on their surveys, but I get the sense that what they really want to know about is two-phase sampling for nonresponse. In fact, two-phase sampling, to be efficient, should have different cost structures across the phases. But the requirements for a responsive design are higher than that. Groves and Heeringa also argued that the phases should have 'complementary' design features. That is, each phase should be attractive to different kinds of sampled people. The hope is that nonresponse biases of prior phases are cancelled out by the biases of subsequent phases. Further, responsive designs can exist without two-phase sampling…
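Since two-phase sampling for nonresponse comes up so often in these conversations, here is a small Python sketch of what that design looks like on its own: a probability subsample of phase-one nonrespondents pursued with a more expensive protocol, with weights inflated accordingly. All rates and sizes are made up for illustration.

```python
# Minimal sketch of two-phase sampling for nonresponse, the design the post
# distinguishes from responsive design. After a cheaper phase-1 protocol, a
# probability subsample of nonrespondents is followed up with a more
# expensive protocol, and their weights are inflated by the inverse of the
# subsampling rate. All numbers are illustrative.

import numpy as np

rng = np.random.default_rng(42)
n = 5000
base_weight = np.full(n, 1.0)

# Phase 1: cheap protocol (e.g., telephone); suppose roughly 40% respond.
phase1_respondent = rng.uniform(size=n) < 0.40

# Phase 2: subsample nonrespondents at a 1-in-3 rate for the expensive protocol.
subsample_rate = 1 / 3
nonresp = ~phase1_respondent
phase2_selected = nonresp & (rng.uniform(size=n) < subsample_rate)

# Weight adjustment: selected phase-2 cases carry the weight of the
# nonrespondents they represent; unselected nonrespondents drop out.
weight = base_weight.copy()
weight[phase2_selected] /= subsample_rate
weight[nonresp & ~phase2_selected] = 0.0

print("Phase-1 respondents:", phase1_respondent.sum())
print("Phase-2 subsample size:", phase2_selected.sum())
```

The subsampling and reweighting are the whole of the two-phase design; a responsive design adds the pre-specified decision rules, complementary protocols, and monitoring that determine when and for whom the next phase is launched.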