
Showing posts from October, 2013

Keeping track of the costs...

I'm really enjoying this article by Andresen and colleagues on the costs and errors associated with tracking (locating panel members). They look at both sides of the problem. I think that is pretty neat. There was one part of the article that raised a question in my mind. On page 46, they talk about tracking costs. They say "...[t]he average tracing costs per interview for stages 1 and 2 were calculated based on the number of tracing activities performed at each stage." An assumption here -- I think -- is that each tracing activity (they list 6 different manual tracing activities) takes the same amount of time. So take the total time from the tracing team, divide it by the number of activities performed, and you have the average time per activity. This is perfectly reasonable and fairly robust. You might do better with a regression model predicting hours from the types and numbers of activities performed in a week. Or you might ask for more specific information…
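To make the arithmetic concrete, here is a minimal Python sketch of both approaches. The weekly hours and activity counts are invented for illustration (the article does not publish the weekly logs), and the no-intercept regression is just one reasonable specification, not the authors' method.

```python
import numpy as np

# Hypothetical weekly tracing-team data (invented numbers): total hours
# logged per week, and counts of each of the 6 manual tracing activities
# performed that week (rows = weeks, columns = activity types).
hours = np.array([40.0, 35.5, 52.0, 28.0, 44.5, 38.0, 47.0, 31.5])
counts = np.array([
    [12,  8, 5, 3,  9, 4],
    [10,  7, 4, 2,  8, 5],
    [15, 10, 7, 4, 11, 6],
    [ 8,  6, 3, 2,  7, 3],
    [13,  9, 6, 3, 10, 5],
    [11,  8, 5, 3,  9, 4],
    [14,  9, 6, 4, 10, 5],
    [ 9,  7, 4, 2,  8, 3],
])

# The article's approach (as I read it): one pooled average time per
# activity, treating all activity types as interchangeable.
avg_per_activity = hours.sum() / counts.sum()
print(f"Average hours per activity: {avg_per_activity:.3f}")

# The regression alternative: predict weekly hours from the counts of each
# activity type, so each type gets its own estimated time. No intercept,
# since zero activities should imply zero tracing hours.
coef, *_ = np.linalg.lstsq(counts, hours, rcond=None)
for i, b in enumerate(coef, start=1):
    print(f"Estimated hours per activity of type {i}: {b:.3f}")
```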

Were we already adaptive?

I spent a few posts cataloging design features that could be considered adaptive. No one labelled them that way in the past. But if we were already doing it, why do we need the new label? I think there are at least two answers to that:

1. Thinking about these features allows us to bring in the complexity of surveys. Surveys are multi-phase activities, where the actions at earlier phases may impact outcomes at later phases. This makes it difficult to design experiments. In clinical trials, some have labelled this phenomenon "practice misalignments." They note that trials that focus on single-phase, fixed-dose treatments are not well aligned with how doctors actually treat patients. The same thing may happen for surveys. When something doesn't work, we don't usually just give up. We try something else.

2. It gives us a concept to think about these practices. It is an organizing principle that can help identify common features, useful experimental methods…

Panel Studies as a Place to Explore New Designs

I really enjoyed this paper by Peter Lynn on targeting cases for different recruitment protocols. He makes a solid case for treating cases unequally, with the goal of equalizing response probabilities across subgroups, and he includes several examples from panel surveys. I strongly agree that panel surveys are fertile ground for trying out new kinds of designs. They have rich data, and there is a sustained chain of interactions between the survey organization and the panel member. This is more like the adaptive treatment setting that Susan Murphy and colleagues have been exploring. I believe that panel surveys may be the natural place for bringing together ideas about adaptive treatment regimes and survey design.
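As a toy illustration of the targeting idea (not Lynn's actual procedure), here is a sketch in Python: estimate response propensities by subgroup from prior waves, then assign a more intensive protocol to subgroups that fall below a target. All subgroup names, numbers, and protocol contents are hypothetical.

```python
# Hypothetical response propensities by subgroup, estimated from prior
# waves of a panel. Subgroups expected to respond at low rates get a more
# intensive protocol, aiming to equalize response probabilities.
ESTIMATED_PROPENSITY = {
    "young_renters": 0.45,
    "families": 0.72,
    "retirees": 0.80,
    "recent_movers": 0.38,
}

TARGET = 0.60  # assumed minimum acceptable response probability

def assign_protocol(subgroup: str) -> str:
    """Return a recruitment protocol for a subgroup (invented rules)."""
    if ESTIMATED_PROPENSITY[subgroup] < TARGET:
        # Extra effort where response is expected to be low.
        return "intensive: incentive + extra contact attempts + tailored letter"
    return "standard: advance letter + standard call schedule"

for group in ESTIMATED_PROPENSITY:
    print(f"{group}: {assign_protocol(group)}")
```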

Persuasion Letters

This is a highly tailored strategy. The idea is that certain kinds of interviewer observations about contact with sampled households will be used to tailor a letter that is sent to the household. For example, if someone in the household says they are "too busy" to complete the survey, a letter is sent that specifically addresses that concern. It's pretty clear that this is adaptive. But here again, thinking about it as an adaptive feature could improve a) our understanding of the technique, and b) -- at least potentially -- its performance. In practice, interviewers request that these letters be sent. There is variability in the rules they use about when to make that request. This could be good or bad. It might be good if they use all of the "data" that they have from their contacts with the household. That's more data than the central office has. On the other hand, it could be bad if interviewers vary in their ability to "correctly" identify concerns…
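For concreteness, here is a minimal sketch of the tailoring logic, assuming a simple mapping from the concern an interviewer records to a letter template. The concern codes and letter wording are invented for illustration.

```python
# Hypothetical mapping from a recorded concern to a persuasion-letter
# template. Codes and wording are invented for illustration.
LETTER_TEMPLATES = {
    "too_busy": "The interview can be completed in short sessions at times you choose...",
    "privacy": "Your answers are confidential and appear only in statistical summaries...",
    "not_interested": "Here is how answers from people like you have informed policy...",
}

DEFAULT_TEMPLATE = "We would greatly value your participation in this study..."

def select_letter(recorded_concern: str) -> str:
    """Pick the letter text matching the interviewer's recorded concern.

    Falls back to a generic letter when the concern is unrecognized --
    one place where variability in how interviewers identify and code
    concerns would show up in practice.
    """
    return LETTER_TEMPLATES.get(recorded_concern, DEFAULT_TEMPLATE)

print(select_letter("too_busy"))
```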