
Showing posts from December, 2009

Which protocol?

A new article by Peytchev, Baxter, and Carley-Baxter outlines the reasoning for altering the survey protocol midstream in order to bring in new types of respondents, as opposed to applying the same protocol and bringing in more of the same. Responsive design (Groves and Heeringa, 2006) is built around similar reasoning. I think it's probably not uncommon for survey organizations to use the same protocol over and over. It shouldn't be surprising that this approach generally brings in "more of the same." But if the response rate is the guiding metric, then such considerations aren't relevant. Under the response rate, it's not who you interview but how many interviews you get; in other words, the composition of the respondent pool is irrelevant as long as you hit your response rate target. As the authors note, however, there is much more to be done in determining the appropriate protocol for each particular situation -- assuming that simply maxi…

Call Scheduling Issue

One of the issues I'm facing in my experiment with call scheduling on the telephone survey is deciding when to truncate effort. Typically, we have a policy that says something like: call a case 12 times across 3 different call windows (6 in one, 4 in another, and 2 in the last). Those calls must occur on 12 different days. If those calls are made and none of them achieves contact (including an answering machine), we assume that further effort will not produce any result, and we finalize the case as a Noncontact. We call this our "grid" procedure, since the paper coversheets we used to use tracked the procedure in a grid. Each such case counts fully against AAPOR RR2, and a portion (the famous "e") of each such case counts against AAPOR RR4. My call-scheduling algorithm did not take this rule into account. If the model favored the same window every day, the requirements of the grid would never be met. That sounds to me like a failure to explore other policies sufficiently, but it could happen. In an…
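
To make the grid rule concrete, here is a minimal sketch of the truncation check, assuming a simple per-call record. The window names, the CallAttempt fields, and the meets_grid_rule function are illustrative assumptions, not our production scheduler.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

@dataclass
class CallAttempt:
    """One call placed to a sampled case (hypothetical record layout)."""
    day: date       # calendar day the call was placed
    window: str     # call window the attempt fell in
    contact: bool   # True for any contact, including an answering machine

# Required no-contact attempts per window (the 6/4/2 split); the window
# names here are made up for illustration.
GRID_REQUIREMENTS = {"weekday_day": 6, "weekday_evening": 4, "weekend": 2}

def meets_grid_rule(attempts: List[CallAttempt]) -> bool:
    """Return True if the case can be finalized as a Noncontact.

    The rule: no attempt ever reached any kind of contact, each window
    received its required number of attempts, and the attempts fall on
    at least 12 distinct days.
    """
    if any(a.contact for a in attempts):
        return False  # any contact, even an answering machine, keeps the case open
    for window, required in GRID_REQUIREMENTS.items():
        if sum(a.window == window for a in attempts) < required:
            return False  # this window still needs more attempts
    distinct_days = {a.day for a in attempts}
    return len(distinct_days) >= sum(GRID_REQUIREMENTS.values())
```

A scheduler that favored the same window every day would never satisfy every entry in GRID_REQUIREMENTS, which is exactly the failure mode described above.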