Friday, October 31, 2014

Happy Halloween!

OK. This is actually a survey-related post. I read a short article about an experiment in which some kids got a candy bar while other kids got a candy bar and a piece of gum. The latter group was less happy. That seems counter-intuitive, but for the latter group, the "trajectory" of the quality of the treats was getting worse. It turns out that this is a phenomenon that other psychologists have studied.

This might be a potential mechanism to explain why sequence matters in some mixed-mode studies -- assuming that other factors aren't confounding the issue.

Friday, October 24, 2014

Quantity becomes Quality

A big question facing our field is whether it is better to adjust the data collection itself or to make post-data-collection adjustments to the data in order to reduce nonresponse bias. I blogged about this a few months ago. In my view, we need to do both.

I'm not sure how the argument goes that says we only need to adjust at the end. I'd like to hear more of it. In my mind, it must rest on the assumption that once you condition on the frame data, the biases disappear -- and that this assumption is valid at all points during the data collection. That may be a caricature, which is why I'd like to hear more of the argument from a proponent of the view.

In my mind, that assumption may or may not be true. That's an empirical question. But it seems likely that at some point in the process of collecting data, particularly early on, that assumption is not true. That is, the data are NMAR, even when I condition on all my covariates (sampling frame and paradata). Put another way, in a cell adjustment framework, responders and nonresponders within cells have different means.
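A small simulation can illustrate the point about cells. In this sketch (the population, cells, and response mechanism are all invented for illustration), response depends on the survey outcome itself within each adjustment cell, so the cell-weighted respondent mean stays biased even after conditioning on the frame covariate:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: two adjustment cells defined by frame data.
n = 100_000
cell = rng.integers(0, 2, n)                        # frame covariate defining cells
y = rng.normal(loc=10 + 5 * cell, scale=2, size=n)  # survey outcome

# NMAR response: within each cell, higher y means more likely to respond.
p_respond = 1 / (1 + np.exp(-(y - (10 + 5 * cell)) / 2))
respond = rng.random(n) < p_respond

# Cell adjustment: weight each cell's respondent mean by the
# cell's population share.
est = sum(
    y[(cell == c) & respond].mean() * (cell == c).mean()
    for c in (0, 1)
)

print(f"true mean: {y.mean():.2f}")
print(f"cell-adjusted respondent mean: {est:.2f}")  # still biased under NMAR
```

If the response mechanism depended only on the cell (MAR given the frame data), the same estimator would be approximately unbiased -- which is the shift discussed below.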

At some point, however, there may be a shift. As the data accumulate (quantitative change), the mechanism may shift (qualitative change) from NMAR to MAR (or less NMAR, errr, if there is such a thing). I think that must be an empirical question. It would be nice to have some gold standard studies to understand this.

I further speculate that such a shift (from NMAR to MAR) is more likely to occur in a controlled process than in a relatively uncontrolled one. I say that because I have been thinking about adaptive design as an attempt to place control on a process with a lot of variation, much of it coming from interviewers.

Friday, October 17, 2014

Decision Support and Interviewer Compliance

When I was working on my dissertation, I got interested in a field of research known as decision support. Researchers in this field build technical systems to help people make decisions. These systems help to implement complex algorithms (i.e. complicated if... then decision rules) and may include real-time data analysis. One of the reasons I got interested in this area was that I was wondering about implementing complicated decision algorithms (e.g. highly tailored, including to incoming paradata) in the field.

One of the problems associated with decision support has to do with compliance. Fortunately, Kawamoto and colleagues did a nifty systematic review of the literature to see what factors were related to compliance in a clinical setting. These factors might be useful in a survey setting as well. They are:
1. The decision support should be part of the workflow.
2. It should deliver recommendations not just information.
3. The support should be delivered at the time the decision is made.
4. It should be computerized.

I had a couple of experiments in which recommendations were delivered to interviewers, and the interviewers failed to follow them. One hypothesis about these failures is that I failed to deliver the recommendations in line with the four principles from Kawamoto et al.
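To make the idea concrete, here is a hypothetical decision-support rule of the kind described above -- computerized (principle 4), delivering a recommendation rather than just information (principle 2), at the moment the call decision is made (principle 3). The function name, thresholds, and inputs are all invented for illustration:

```python
# A hypothetical if...then decision rule for call scheduling.
# All thresholds and parameter names are made up for this sketch.
def recommend_call_window(prior_attempts: int, evening_contact_rate: float) -> str:
    """Return an actionable recommendation, not just information."""
    if prior_attempts >= 2 and evening_contact_rate > 0.3:
        return "Call this case between 6pm and 9pm."
    return "Call this case at any time today."

# Delivered in the interviewer's workflow, at the point of decision.
print(recommend_call_window(prior_attempts=3, evening_contact_rate=0.45))
```

Embedding such a rule in the case-management software the interviewer already uses would address principle 1 (part of the workflow).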

Friday, October 10, 2014

Training Works... Until it Doesn't

I recently had need for several citations showing that training interviewers works. Of course, Fowler and Mangione show that training can improve interviewer performance in delivering a questionnaire. Groves and McGonagle also show that training can have an impact on cooperation rates.

But then I also thought of the example from Campanelli and colleagues, where experienced interviewers preferred to make call attempts during the day -- when these attempts would be less successful -- despite training that other times would work better.

So, an interesting question: when does training work? And when does it not?

Friday, October 3, 2014

Sensitivity Analysis and Nonresponse Bias

For a while now, when I talk about the risk of nonresponse bias, I suggest that researchers look at the problem from as many different angles as possible, employing varied assumptions. I've also pointed to work by Andridge and Little that uses proxy pattern-mixture models and a range of assumptions to do sensitivity analysis. In practice, these approaches have been rare.
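The spirit of varying assumptions can be sketched very simply. This is a toy version, not the actual proxy pattern-mixture machinery of Andridge and Little: vary an assumed difference between respondent and nonrespondent means and trace out the implied full-sample mean (all numbers are invented):

```python
# Toy sensitivity analysis for nonresponse bias: the respondent mean
# and response rate are observed; the respondent-nonrespondent gap
# (delta) is an untestable assumption that we vary.
ybar_r = 42.0        # hypothetical respondent mean
response_rate = 0.6  # hypothetical response rate

for delta in (-5, -2, 0, 2, 5):   # assumed gap, in outcome units
    ybar_nr = ybar_r + delta      # implied nonrespondent mean
    overall = response_rate * ybar_r + (1 - response_rate) * ybar_nr
    print(f"delta={delta:+d}  implied overall mean={overall:.1f}")
```

Reporting the whole range, rather than a single adjusted point estimate, is what makes this a sensitivity analysis.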

A couple of years ago, I saw a presentation at JSM that discussed a method for doing sensitivity analyses for binary outcomes in clinical trials with two treatments. The method they proposed was graphical and seemed like it would be simple to implement. An article on the topic has now come out. I like the idea and think it might have applications in surveys. All we need are binary outcomes where we are comparing two groups. It seems that there are plenty of those situations.
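A rough sketch of the underlying computation (not the published graphical method itself, and with made-up numbers): for a binary outcome in two groups, vary the assumed number of successes among the nonrespondents in each group and see whether the sign of the estimated difference can flip:

```python
# Toy tipping-point computation for a binary outcome in two groups.
# Sample sizes, respondent counts, and success counts are invented.
n = {"A": 200, "B": 200}      # sample size per group
resp = {"A": 150, "B": 140}   # respondents per group
succ = {"A": 90, "B": 70}     # successes among respondents

# Sweep every possible number of successes among nonrespondents in
# each group and record the implied difference in success rates.
diffs = [
    (succ["A"] + sa) / n["A"] - (succ["B"] + sb) / n["B"]
    for sa in range(n["A"] - resp["A"] + 1)
    for sb in range(n["B"] - resp["B"] + 1)
]

lo, hi = min(diffs), max(diffs)
print(f"difference ranges from {lo:.3f} to {hi:.3f}")
print(f"sign can flip under some assumptions: {lo < 0 < hi}")
```

The published method displays a grid like this graphically, shading the assumptions under which the conclusion would change.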