
Showing posts from November, 2016

The Cost of a Call Attempt

We recently did an experiment with incentives on a face-to-face survey. As one aspect of the evaluation of the experiment, we looked at the costs associated with each treatment (i.e. different incentive amounts).

The costs are a bit complicated to parse out. The incentive amount is easy, but the interviewer time is hard. Interviewers record their time at the day level, not at the housing-unit level. So it's difficult to determine how much a single call attempt costs.

Even if we had accurate data on the time spent making the call attempt, there would still be the travel time from the interviewer's home to the area segment. Even if I could accurately calculate that, how would I spread it across call attempts? This might not matter if all I'm interested in is the marginal cost of adding an attempt to a visit to an area segment. But if I want to evaluate a treatment -- like the incentive experiment -- I need to account for all the interviewer costs, as best…
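The distinction between average and marginal cost here can be made concrete with a toy sketch. All of the numbers and field names below are invented for illustration; real interviewer timesheets would not break out time this cleanly.

```python
# Hypothetical sketch: allocating a day's interviewer time across call
# attempts. All values (pay rate, travel time, attempt durations) are
# invented for illustration.

HOURLY_RATE = 25.0  # assumed interviewer pay rate, dollars per hour


def avg_cost_per_attempt(travel_hours, hours_per_attempt, n_attempts):
    """Average cost per attempt when the day's travel time is spread
    evenly across all attempts made in the segment that day."""
    total_hours = travel_hours + hours_per_attempt * n_attempts
    return HOURLY_RATE * total_hours / n_attempts


def marginal_cost_per_attempt(hours_per_attempt):
    """Marginal cost of one more attempt during an existing segment
    visit: only the attempt time itself, since travel is already sunk."""
    return HOURLY_RATE * hours_per_attempt


# A day with 1.5 hours of travel and six 15-minute attempts:
avg = avg_cost_per_attempt(1.5, 0.25, 6)   # -> 12.50 dollars per attempt
mc = marginal_cost_per_attempt(0.25)       # ->  6.25 dollars per attempt
```

The gap between the two numbers is the point: how travel time gets allocated drives which cost figure you report, and a treatment evaluation needs the full (average) figure, not just the marginal one.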

Methodology on the Margins

I'm thinking again about experiments that we run. Yes, they are usually messy. In my last post, I talked about the inherent messiness of survey experiments, which stems from the many design features a survey has to consider. These features may interact in ways that mean we can't simply pull out an experiment on a single feature and generalize the result to other surveys.

But I started thinking about other problems we have with experiments. I think another big issue is that methodological experiments are often run as "add-ons" to larger surveys. It's hard to obtain funding to run a survey just to do a methodological experiment. So, we add our experiments to existing surveys.

The problem is that this approach imposes a limitation: the experiments can't risk creating problems for the survey. In other words, they can't lead to reductions in response rates or threaten other targets that are associated with the main (i.e. non-methodological)…

Messy Experiments

I have this feeling that survey experiments are often very messy. Maybe it's just in comparison to the ideal type -- a laboratory with a completely controlled environment where only one variable is altered between two randomly assigned groups.

But still, surveys have a very complicated structure. We often call this the "essential survey conditions." But that glib phrase might hide some important details. My concern is that when we focus on a single feature of a survey design, e.g. incentives, we might come to the wrong conclusion if we don't consider how that feature interacts with other design features.

This matters when we attempt to generalize from published research to another situation. If we only focus on a single feature, we might come to the wrong conclusion. Take the well-known result -- incentives work! Except that the impact of incentives seems to be different for interviewer-administered surveys than for self-administered surveys. The other features of th…
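A toy calculation can show how a pooled result hides an interaction like this. Every response rate below is invented purely for illustration; the shape of the problem, not the numbers, is the point.

```python
# Toy illustration of a design-feature interaction (all rates invented).
# An incentive "works" on average, but the size of its effect differs
# sharply between interviewer-administered and self-administered modes.

rates = {
    # (mode, incentive): hypothetical response rate
    ("interviewer", "none"): 0.60,
    ("interviewer", "$5"):   0.63,   # small gain with an interviewer present
    ("self",        "none"): 0.30,
    ("self",        "$5"):   0.42,   # much larger gain in self-administration
}


def incentive_effect(mode):
    """Difference in response rate attributable to the incentive,
    within a single mode."""
    return rates[(mode, "$5")] - rates[(mode, "none")]


pooled = (incentive_effect("interviewer") + incentive_effect("self")) / 2
# The pooled effect is about 7.5 percentage points -- but that single
# number masks a 3-point effect in one mode and a 12-point effect in
# the other. Generalizing the pooled result to either mode misleads.
```

This is the generalization trap in miniature: a published "incentives work" finding may be an average over design contexts that your survey does not share.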