Posts

Showing posts from June, 2014

Formalizing the Optimization Problem

I heard Andy Peytchev speak about responsive design recently. He made several really good points. One of these was a "total survey error" kind of observation: different surveys have different objectives, and those objectives may be ranked differently. One survey may prioritize sampling error while another treats nonresponse bias as its biggest priority. Since there are always tradeoffs between error sources, the priorities indicate which way those tradeoff decisions were, or will be, made.

Since responsive design has largely been thought of as a remedy for nonresponse bias, this idea seems novel. Of course, it is worth recalling that Groves and Heeringa did originally propose the idea in a total survey error perspective. On the other hand, many of their examples were related to nonresponse.

I think it is important to 1) think about these tradeoffs in errors and costs, 2) explicitly state what they are for any given survey, and 3) formalize the tradeoffs. I'm not sure that w…
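As a rough illustration of what such a formalization might look like (this is my own sketch, not anything from Peytchev's talk): treat the survey design as a decision variable, weight each error source by its priority for that particular survey, and minimize the weighted total error subject to the budget:

```latex
\min_{d \in D} \; \sum_{j} w_j \, \mathrm{MSE}_j(d)
\quad \text{subject to} \quad C(d) \le B
```

where \(d\) is a candidate design, \(\mathrm{MSE}_j(d)\) is the error contributed by source \(j\) (sampling error, nonresponse bias, and so on), \(w_j\) are the priority weights, \(C(d)\) is the cost of the design, and \(B\) is the budget. Different surveys ranking their objectives differently just amounts to different choices of the \(w_j\).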

"Failed" Experiments

I ran an experiment a few years ago that failed. I mentioned it in my last blog post. I reported on it in a chapter in the book on paradata that Frauke edited. For the experiment, I offered a recommended call time to interviewers. The recommendations were delivered for a random half of each interviewer's sample. Interviewers followed the recommendations at about the same rate whether they saw them or not (about 20% compliance). So, basically, they didn't follow the recommendations.

In debriefings, interviewers said "we call every case every time, so the recommendations at the housing unit were a waste of time." This made sense, but it also raised more questions for me.

My first question was: why don't the call records show that? Either interviewers exaggerated when they said they call "every" case every time, or calls are underreported in the records. Or both.

At that point, using GPS data seemed like a good way to investigate this question. Once we started examining the GPS dat…

Setting an Appointment for Sampled Units... Without their Assent

Kreuter, Mercer, and Hicks have an interesting article in JSSAM on a panel study, the Medical Expenditure Panel Survey (MEPS). They note my failed attempt to deliver recommended calling times to interviewers. They had a nifty idea: preload the best time to call as an appointment. Letters were sent to the panel members announcing the appointment.

Good news: this method improved efficiency without harming response rates. There was some worry that setting appointments without consulting the panel members would turn them off, but that didn't happen.

It does remind me of another failed attempt of mine from a few years ago. Well, it wasn't an experiment, just a design change. We decided that it would be good to leave answering machine messages on the first telephone call in an RDD sample. In the message, we promised that we would call back the next evening at a specified time. Like an appointment. Without experimental evidence, it's hard to say, but it did seem to increase con…

Costs of Face-to-Face Call Attempts

I've been working on an experiment where evaluating cost savings is an important outcome. It's difficult to measure costs in this environment: timesheets and call records are recorded separately, and it's difficult to parse travel time out from other time.

One study actually shadowed a subset of interviewers in order to generate more accurate cost estimates. This is an expensive means to evaluate costs that may not be practical in many situations.

It might be that increasing computerization does away with this problem. In a telephone facility, everything is timestamped, so we can calculate how long most call attempts take. It might be that we will be able to do this in face-to-face studies soon, if not already.
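To make the telephone-facility case concrete, here is a minimal sketch of that calculation. The records and field layout are entirely my own invention for illustration; the idea is just that when every event is timestamped, the length of an attempt can be approximated by the gap to the next timestamped event for the same interviewer.

```python
from datetime import datetime

# Hypothetical call records for one interviewer: one row per attempt,
# (interviewer_id, timestamp). Real paradata would have many more fields.
records = [
    ("int01", "2014-06-02 18:00"),
    ("int01", "2014-06-02 18:07"),
    ("int01", "2014-06-02 18:21"),
]

# Parse the timestamps, then approximate each attempt's length (in minutes)
# as the gap to the next event. The final attempt of a shift has no
# following event, so it gets no estimate here.
times = [datetime.strptime(t, "%Y-%m-%d %H:%M") for _, t in records]
durations = [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]
print(durations)  # → [7.0, 14.0]
```

Nothing like this is possible when the only cost data are weekly timesheets, which is exactly the problem in the face-to-face setting: there is no event-level record that separates travel from contact attempts.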