Showing posts from October, 2012

Baby and the Bathwater

This post is a follow-up to my last one. Since that post, I came across an interesting article at Survey Practice. I'm really pleased to see this article, since this is a discussion we need to have. The article, by Koen Beullens and Geert Loosveldt, presents the results of a simulation study on the impact of using different indicators to govern data collection. In other words, they simulate the consequences of maximizing different indicators during data collection. The three indicators are the response rate, the R-Indicator (Schouten et al., 2009), and the maximal bias (also developed by Schouten et al., 2009). The simulation shows a situation where maximizing either of the latter two indicators would give you a different result than maximizing the response rate. Maximizing the R-Indicator, for example, led to a slightly lower response rate than the data collection strategy that maximized the response rate. This is an interesting simulation. It pretty cl...
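
For readers who want to see the mechanics, here is a minimal sketch (mine, not from the article) of how the R-Indicator and the maximal bias can be computed once you have estimated response propensities from some response model; the propensity values below are made up.

```python
# A minimal sketch, assuming estimated response propensities are available.
# It computes the R-Indicator, R = 1 - 2 * S(rho), and the maximal-bias bound,
# (1 - R) / (2 * mean(rho)), as I read the definitions in Schouten et al. (2009).
import numpy as np

def r_indicator(propensities):
    """R-Indicator: 1 minus twice the standard deviation of the propensities."""
    return 1.0 - 2.0 * np.std(propensities, ddof=1)

def maximal_bias(propensities):
    """Bound on the nonresponse bias of an unadjusted respondent mean."""
    return (1.0 - r_indicator(propensities)) / (2.0 * np.mean(propensities))

# Hypothetical propensities, for illustration only
rho = np.array([0.65, 0.72, 0.55, 0.80, 0.60, 0.70])
print(r_indicator(rho), maximal_bias(rho))
```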

Do you really believe that?

I had an interesting discussion with someone at a conference recently. We had given a presentation that included some discussion of how response rates are not good predictors of when nonresponse bias might occur. We showed a slide from Groves and Peytcheva. Afterwards, I was speaking with someone who was not a survey methodologist. She asked me if I really believed that response rates didn't matter. I was a little taken aback. But as we talked some more, it became clear that she thought we were arguing for achieving low response rates. I thought it was interesting that the argument could be perceived that way. To my mind, the argument wasn't about whether we should be trying to lower response rates. It was more about what tools we should be using to diagnose the problem. In the past, the response rate was used as a summary statistic for discussing nonresponse. But the evidence from Groves and Peytcheva calls into question the utility of that single statis...

Estimating effort in field surveys

One of the things that I miss about telephone surveys is being able to accurately estimate how much various activities cost, or even how long each call takes. Since everyone on a telephone survey works on a centralized system and everything gets time-stamped, you can calculate how long calls take. It's not 100% accurate -- weird things happen (someone takes a break and it doesn't show up in the data, networks collapse, etc.), but usually you can get pretty accurate estimates. In the field, the interviewers tell us what they did and when. But they have to estimate how many hours each subactivity (travel, production, administration) takes, and they don't report anything at the call level. I've been using regression models to estimate how long each call takes in field studies. The idea is pretty simple: regress the hours worked in a week on the counts of the various types of calls made that week. The estimated coefficients are the estimate of the average time each type of ...
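
To make the idea concrete, here is a minimal sketch of that regression on a made-up weekly interviewer file; the call types, column names, and numbers are purely illustrative.

```python
# A minimal sketch of the weekly regression, using a made-up interviewer file.
# Column names and values are illustrative, not from any actual study.
import pandas as pd
import statsmodels.api as sm

weekly = pd.DataFrame({
    "hours":            [32.0, 28.5, 41.0, 35.5, 26.0, 39.0],  # total hours reported
    "contact_calls":    [30, 22, 38, 33, 24, 31],
    "noncontact_calls": [55, 50, 72, 58, 40, 70],
    "interviews":       [8, 5, 11, 10, 6, 9],
})

# Regress weekly hours on counts of each call type; each coefficient is an
# estimate of the average hours per call of that type, and the intercept
# absorbs overhead (travel, administration) not tied to any single call.
X = sm.add_constant(weekly[["contact_calls", "noncontact_calls", "interviews"]])
fit = sm.OLS(weekly["hours"], X).fit()
print(fit.params)
```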

Call Scheduling in Cluster Samples

A couple of years ago, I tried to deliver recommended times to call housing units to interviewers doing face-to-face interviewing in an area probability sample. Interviewers drive to sampled area segments and then visit several housing units while they are there. This is how cost savings are achieved. The interviewers didn't use the recommendations -- we had experimental evidence to show this. I had thought the recommendations might help them organize their work. In talking with them afterwards, I learned that they didn't see the utility, since they plan trips to segments, not single housing units. I decided to try something simpler. To make sure that calls are being made at different times of day, identify segments that have not been visited in all call windows, or have been visited in only one call window. This information might help interviewers schedule trips if they haven't noticed that this situation has occurred in a segment. If this is helpful, then maybe this recommendation...
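
Here is a minimal sketch of what that segment-level check might look like, using a hypothetical call-record table; the segment IDs, window labels, and field names are all illustrative.

```python
# A minimal sketch of the segment-level check, assuming a call-record table
# with one row per call attempt; segment IDs, window labels, and field names
# are all hypothetical.
import pandas as pd

calls = pd.DataFrame({
    "segment":     ["A", "A", "A", "B", "B", "C"],
    "call_window": ["weekday_day", "weekday_eve", "weekend",
                    "weekday_day", "weekday_day", "weekday_eve"],
})

all_windows = {"weekday_day", "weekday_eve", "weekend"}

# Which call windows has each segment been tried in so far?
seen = calls.groupby("segment")["call_window"].agg(lambda s: set(s))

# Flag segments with untried windows -- candidates for the next trip
missing = seen.apply(lambda w: sorted(all_windows - w))
print(missing[missing.map(len) > 0])
```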