Friday, June 27, 2014

Formalizing the Optimization Problem

I heard Andy Peytchev speak about responsive design recently. He raised some really good points. One of these was a "total survey error" kind of observation. He pointed out that different surveys have different objectives and that these may be ranked differently. One survey may prioritize sampling error, while another has nonresponse bias as its biggest priority. Since there are always tradeoffs between error sources, those priorities indicate which way the design decisions were, or will be, made.

Since responsive design has largely been thought of as a remedy for nonresponse bias, this idea seems novel. Of course, it is worth recalling that Groves and Heeringa originally proposed the idea from a total survey error perspective. On the other hand, many of their examples were related to nonresponse.

I think it is important to 1) think about these tradeoffs between errors and costs, 2) explicitly state what they are for any given survey, and 3) formalize the tradeoffs. I'm not sure that we usually even get to step one, let alone to steps two and three.

By our training, survey methodologists ought to be able to do some of steps one and two. Operations Research and other fields, such as Computer Science, might be helpful for accomplishing step three. Melania Calinescu, in her dissertation, used formal optimization methods to work with explicitly stated objectives with respect to nonresponse and measurement errors. In some examples, she used existing data to identify designs that maximized the R-indicator while constraining expected measurement error to stated limits.
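
Just to make that idea concrete, here is a toy sketch of how such a problem could be written down and solved by brute force: choose an effort level for each stratum so as to maximize the R-indicator, subject to limits on expected measurement error and cost. This is not Calinescu's actual formulation; the strata, propensities, error rates, costs, and constraint values below are all hypothetical placeholders.

    # A toy formalization: pick one effort level per stratum to maximize the
    # R-indicator while keeping expected measurement error and cost below
    # stated limits. All inputs are hypothetical placeholders.
    from itertools import product
    import numpy as np

    # Estimated response propensity, expected measurement error, and cost per
    # case for each (stratum, effort level), assumed known from prior data.
    propensity = {("young", "standard"): 0.35, ("young", "intensive"): 0.55,
                  ("old", "standard"): 0.60, ("old", "intensive"): 0.75}
    meas_error = {("young", "standard"): 0.10, ("young", "intensive"): 0.18,
                  ("old", "standard"): 0.08, ("old", "intensive"): 0.15}
    cost = {("young", "standard"): 20.0, ("young", "intensive"): 45.0,
            ("old", "standard"): 20.0, ("old", "intensive"): 45.0}
    share = {"young": 0.5, "old": 0.5}  # population shares of the strata

    def r_indicator(design):
        """R = 1 - 2 * (weighted SD of response propensities across strata)."""
        p = np.array([propensity[(s, e)] for s, e in design.items()])
        w = np.array([share[s] for s in design])
        sd = np.sqrt(np.sum(w * (p - np.sum(w * p)) ** 2))
        return 1 - 2 * sd

    best = None
    for efforts in product(["standard", "intensive"], repeat=2):
        design = dict(zip(["young", "old"], efforts))
        err = sum(share[s] * meas_error[(s, e)] for s, e in design.items())
        cst = sum(share[s] * cost[(s, e)] for s, e in design.items())
        if err <= 0.15 and cst <= 40.0:  # the explicitly stated constraints
            if best is None or r_indicator(design) > r_indicator(best):
                best = design

    print(best, round(r_indicator(best), 2) if best else "no feasible design")

In this toy example the search lands on putting the extra effort into the low-propensity stratum, which is exactly the kind of tradeoff that step three forces us to state out loud.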

These are tough problems to formalize, but doing so would, I think, be a real contribution to actually implementing the total survey error perspective... for responsive or any other kind of survey design.

Friday, June 20, 2014

"Failed" Experiments

I ran an experiment a few years ago that failed. I mentioned it in my last blog post, and I reported on it in a chapter in the book on paradata that Frauke edited. For the experiment, interviewers were given a recommended call time for each sampled housing unit, but the recommendations were displayed for only a random half of each interviewer's sample. Interviewers called at the recommended times at about the same rate (roughly 20%) whether they saw the recommendations or not. So, basically, they didn't follow the recommendations.

In debriefings, interviewers said, "We call every case every time, so the recommendations at the housing-unit level were a waste of time." This made sense, but it also raised more questions for me.

My first question was: why don't the call records show that? Either interviewers exaggerated when they said they call "every" case every time, or calls are underreported in the records, or both.
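
One simple check, if the call records can be linked to interviewers' assignments, is to compute how much of an interviewer's workload actually gets a recorded call on each trip. A rough sketch, with hypothetical file and column names (and ignoring, for simplicity, cases that have already been finalized):

    # For each interviewer-day, what share of that interviewer's assigned cases
    # has a recorded call attempt? If "we call every case every time" were
    # literally true and fully recorded, this share would be close to 1.
    # File and column names are hypothetical.
    import pandas as pd

    calls = pd.read_csv("call_records.csv", parse_dates=["call_date"])
    assignments = pd.read_csv("assignments.csv")  # interviewer_id, case_id

    assigned = assignments.groupby("interviewer_id")["case_id"].nunique()
    called = (calls.groupby(["interviewer_id", "call_date"])["case_id"]
                   .nunique()
                   .reset_index(name="n_called"))
    called["share_called"] = called["n_called"] / called["interviewer_id"].map(assigned)
    print(called["share_called"].describe())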

At that point, using GPS data seemed like a good way to investigate this question. Once we started examining the GPS data, many new questions opened up. For example, I would have thought that interviewers who travel through area segments in a straight line would be most efficient. What we saw was that interviewers don't do that much, and they seem to have better results the less they do it.
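
As a rough illustration of the kind of thing we looked at (the specific index and the data layout here are my own simplification, not the actual analysis), one can compare the straight-line distance between the first and last GPS points on a trip with the total distance actually traveled:

    # A crude "directness" index for one interviewer trip: straight-line
    # distance from first to last GPS point divided by total distance traveled.
    # Values near 1 mean the trip moved through the segment in roughly a
    # straight line. The input format, a time-ordered list of (lat, lon)
    # points, is an assumption.
    from math import radians, sin, cos, asin, sqrt

    def haversine_km(p1, p2):
        """Great-circle distance between two (lat, lon) points in kilometers."""
        lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * asin(sqrt(a))

    def directness(points):
        traveled = sum(haversine_km(a, b) for a, b in zip(points, points[1:]))
        return haversine_km(points[0], points[-1]) / traveled if traveled else 1.0

    trip = [(42.28, -83.74), (42.29, -83.73), (42.28, -83.72), (42.30, -83.71)]
    print(round(directness(trip), 2))

An index like this can then be compared with contact rates or hours charged at the interviewer-trip level.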

In any event, the failed experiment led to a whole bunch of new, interesting questions. In that sense, it wasn't such a failure.

Friday, June 13, 2014

Setting an Appointment for Sampled Units... Without their Assent

Kreuter, Mercer, and Hicks have an interesting article in JSSAM on a panel study, the Medical Expenditure Panel Survey (MEPS). They note my failed attempt to deliver recommended calling times to interviewers. They had a nifty idea... preload the best time to call as an appointment, and send letters to the panel members announcing the appointment.
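
I haven't seen how they built the preloads, but the basic step is easy to picture: from the prior wave's call records, pick for each panel member the day-of-week and time-of-day window with the best contact record, and write that out as the appointment. A sketch along those lines, with hypothetical file and column names:

    # For each panel case, pick the day-of-week x time-of-day window with the
    # highest prior-wave contact rate and preload it as an appointment.
    # File and column names are hypothetical.
    import pandas as pd

    calls = pd.read_csv("prior_wave_calls.csv", parse_dates=["call_time"])
    calls["window"] = (calls["call_time"].dt.day_name() + " " +
                       pd.cut(calls["call_time"].dt.hour,
                              bins=[0, 12, 17, 24], right=False,
                              labels=["morning", "afternoon", "evening"]).astype(str))

    # Contact rate per case and window; break ties by number of attempts.
    rates = (calls.groupby(["case_id", "window"])["contacted"]
                  .agg(rate="mean", attempts="size")
                  .reset_index()
                  .sort_values(["case_id", "rate", "attempts"],
                               ascending=[True, False, False]))

    appointments = rates.drop_duplicates("case_id")[["case_id", "window"]]
    appointments.to_csv("preloaded_appointments.csv", index=False)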

Good news. This method improved efficiency without harming response rates. There was some worry that setting appointments without consulting the panel members would turn them off, but that didn't happen.

It does remind me of another failure of mine from a few years ago. Well, it wasn't an experiment, just a design change. We decided that it would be good to leave answering machine messages on the first telephone call in an RDD sample. In the message, we promised to call back the next evening at a specified time. Like an appointment. Without experimental evidence it's hard to say, but it did seem to increase contact rates slightly in this ongoing survey. However, the telephone facility hated it! It clogged the calling algorithm with tons of appointments. That was the "failure" I mentioned earlier.

Friday, June 6, 2014

Costs of Face-to-Face Call Attempts

I've been working on an experiment where evaluating cost savings is an important outcome. It's difficult to measure costs in this environment: timesheets and call records are kept separately, and it's hard to parse out travel time from other time.

One study actually shadowed a subset of interviewers in order to generate more accurate cost estimates. That's an expensive way to evaluate costs, and it may not be practical in many situations.

It might be that increasing computerization will do away with this problem. In a telephone facility, everything is timestamped, so we can calculate how long most call attempts take. We may already be able to do this in face-to-face studies, or will be able to soon.
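
For the telephone case the computation really is trivial once everything is timestamped. A minimal sketch, assuming a call-record file with one row per attempt and start/end timestamps (hypothetical names):

    # How long do call attempts take? With timestamped CATI records this is a
    # simple difference; the file and column names here are hypothetical.
    import pandas as pd

    attempts = pd.read_csv("cati_call_records.csv",
                           parse_dates=["start_time", "end_time"])
    attempts["minutes"] = ((attempts["end_time"] - attempts["start_time"])
                           .dt.total_seconds() / 60)

    print(attempts["minutes"].describe())                       # overall distribution
    print(attempts.groupby("result_code")["minutes"].median())  # by call outcome

The hard part in face-to-face studies is exactly what this file takes for granted: a reliable start and end time for each attempt, with travel recorded separately.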
