I ran an experiment a few years ago that failed. I mentioned it in my last blog post, and I reported on it in a chapter of the book on paradata that Frauke edited. For the experiment, I offered interviewers a recommended time to call each case. The recommendations were delivered for a random half of each interviewer's sample. Interviewers followed the recommendations at about the same rate whether they saw them or not (about 20% compliance in both groups). So, basically, they didn't follow the recommendations.
In debriefings, interviewers said "we call every case every time, so the recommendations at the housing-unit level were a waste of time." This made sense, but it also raised more questions for me.
My first question was: why don't the call records show that? Either interviewers exaggerated when they said they call "every" case every time, or calls are underreported in the records, or both.
At that point, using GPS data seemed like a good way to investigate this question. Once we started examining the GPS data, it opened up many new questions. For example, I would have expected that interviewers who travel through area segments in a straight line would be the most efficient. What we saw instead was that interviewers rarely travel that way, and they seem to have better results the less they do.
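To make the idea of route straightness concrete, here is a minimal sketch of one way it could be measured from GPS fixes: the ratio of the straight-line distance between a trip's first and last points to the total distance actually traveled, so a value near 1 means a nearly straight route. This is purely an illustration; the metric, the function names, and the sample coordinates are my assumptions, not the actual analysis behind the results above.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3956 * asin(sqrt(a))

def straightness(fixes):
    """Ratio of straight-line distance (first fix to last fix) to total
    path length. 1.0 = perfectly straight route; values near 0 = lots of
    doubling back. `fixes` is an ordered list of (lat, lon) tuples."""
    path = sum(haversine_miles(*a, *b) for a, b in zip(fixes, fixes[1:]))
    direct = haversine_miles(*fixes[0], *fixes[-1])
    return direct / path if path > 0 else 1.0

# Hypothetical trace of one interviewer's trip through a segment.
trip = [(42.28, -83.74), (42.29, -83.73), (42.28, -83.72), (42.30, -83.71)]
print(f"straightness = {straightness(trip):.2f}")
```

With a per-trip metric like this in hand, one could relate route straightness to contact or interview rates across interviewers, which is roughly the comparison the surprising result above suggests.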
In any event, the failed experiment led to a whole bunch of new, interesting questions. In that sense, it wasn't such a failure.