

Showing posts from September, 2009

Calling Experiment for a Face-to-Face Survey?

I've been working on the experiment with calling strategies in a telephone survey. That was the obvious place to start, since the call scheduling there is done by a computerized algorithm. But I also work on a lot of face-to-face surveys, where the interviewer decides when to place a call. Other research has shown that interviewers vary in their ability to schedule calls successfully. Can we help them with this problem? I'd like to try our calling experiment on a face-to-face survey. How? By delivering a statistically-derived recommendation to the interviewer about when to call each sampled unit. On one face-to-face survey, we've already changed interviewer behavior by delivering recommendations about which cases to call first. I'm wondering whether we can extend these results by suggesting specific times to call.

New Calling Experiment

Since the results of the experiment on call scheduling were good (the experimental method had a slight edge over the current protocol), I've been allowed to test the experimental method against other contenders. The experimental method is described in a prior post. This month, I'm testing that method, which assigns each case to the call window with the highest predicted contact probability (the MLE) across the four windows, against a method which uses the Upper Confidence Bound (UCB) of the predicted probability. The UCB rule quite often assigns a different calling window than the MLE rule. The UCB method is designed to address your uncertainty about a case: windows that have been tried less often get an optimistic bonus, so they are explored rather than written off. Lai ("Adaptive Allocation and the Multi-Armed Bandit Problem," 1987) proposed the method. Other than the fact that our context (calling households to complete surveys) is a relatively short process (i.e. few pulls on the Multi-Armed Bandit), the multi-armed bandit analogy fits quite well. In my dissertation, I d...
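
To make the contrast concrete, here is a minimal sketch of the two window-selection rules. The data, the function names, and the normal-approximation confidence bound are all illustrative assumptions (Lai's actual allocation index is more involved), not the survey's production code.

```python
import math

def mle_window(successes, attempts):
    """Pick the call window with the highest estimated contact probability."""
    rates = [s / n if n > 0 else 0.0 for s, n in zip(successes, attempts)]
    return max(range(len(rates)), key=lambda l: rates[l])

def ucb_window(successes, attempts, z=1.96):
    """Pick the window with the highest upper confidence bound, so that
    windows with few trials (high uncertainty) still get explored."""
    def ucb(s, n):
        if n == 0:
            return float("inf")  # untried window: maximal uncertainty
        p = s / n
        return p + z * math.sqrt(p * (1 - p) / n)
    return max(range(len(successes)), key=lambda l: ucb(successes[l], attempts[l]))

# Hypothetical call history for four windows: window 0 has the best point
# estimate, but window 1 has only been tried four times, so its upper
# confidence bound is higher and UCB assigns the next call there.
successes = [8, 1, 2, 3]
attempts = [20, 4, 16, 15]
print(mle_window(successes, attempts))  # 0
print(ucb_window(successes, attempts))  # 1
```

The two rules agree once every window has been tried often enough; they diverge exactly where the data are thin, which is where the UCB method spends its exploration.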

Mired in Myopia?

Reinforcement Learning (RL) deals with multi-step decision processes. One strategy for making decisions in a multi-step environment is to always choose the option that maximizes your immediate payoff. In RL, this strategy is called "myopic," since it never looks beyond the payoff for the current action. The problem is that a myopic strategy might produce a smaller total payoff at the end of the process. If we look at the process as a whole, we may identify a sequence of actions that produces a higher overall reward while not maximizing the reward for each individual action. This all relates to an experiment that I'm running on contact strategies. The experiment controls all calls other than appointments and refusal conversion attempts. The overall contact rate was 11.6% for the experimental protocol and 9.0% for the control group. The difference is statistically significant. But establishing contact is only an intermediate outcome. The final outcome of this multi-step process is...
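
A toy two-step decision process shows how the myopic strategy loses. The numbers are made up for illustration, not survey data: each first-step action yields an immediate reward and determines which second-step rewards are reachable.

```python
# Each action maps to (immediate reward, rewards reachable at step two).
process = {
    "A": (5, [1, 0]),  # big immediate payoff, poor follow-up options
    "B": (3, [5, 2]),  # smaller immediate payoff, better follow-up
}

def myopic_total(process):
    """Always take the action with the largest immediate reward."""
    first = max(process, key=lambda a: process[a][0])
    r1, nxt = process[first]
    return r1 + max(nxt)

def planned_total(process):
    """Choose the first action to maximize the total two-step reward."""
    return max(r1 + max(nxt) for r1, nxt in process.values())

print(myopic_total(process))   # 6: picks A for its immediate reward of 5
print(planned_total(process))  # 8: picks B, sacrificing immediate reward
```

The analogy to calling strategies: maximizing the chance of contact on each individual call is the myopic rule, and it need not maximize the final outcome of the whole sequence of calls.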

An Experimental Adaptive Contact Strategy

I'm running an experiment on contact methods in a telephone survey. I'm going to present the results of the experiment at the FCSM conference in November. Here's the basic idea. Multi-level models are fit daily, with the household as a grouping factor. The models provide household-specific estimates of the probability of contact in each of four call windows. The predictor variables in the model are the geographic context variables available for an RDD sample. Let $\mathbf{X}_{ij}$ denote a $k_j \times 1$ vector of these predictor variables for the $i^{th}$ household and $j^{th}$ call. The data records are calls. There may be zero, one, or multiple calls to a household in each window. The outcome variable is an indicator for whether contact was achieved on the call. This contact indicator is denoted $R_{ijl}$ for the $i^{th}$ household on the $j^{th}$ call in the $l^{th}$ window. Then for each of the four call windows, denoted $l$, a separate model is fit where each household is assu...
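
The scoring step described above can be sketched as follows: one fitted model per call window, each mapping a household's context variables to a contact probability, with the recommended window being the argmax. The coefficient values here are made up for illustration, and the sketch uses a plain logistic prediction, omitting the household-level random effects of the actual multilevel models.

```python
import math

def predict_contact(x, coef, intercept):
    """Logistic-model contact probability for one household in one window."""
    eta = intercept + sum(b * xi for b, xi in zip(coef, x))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical fitted coefficients for the four call windows, over two
# geographic context predictors.
windows = {
    1: ([0.4, -0.2], -1.5),
    2: ([0.1, 0.3], -1.2),
    3: ([-0.3, 0.5], -1.0),
    4: ([0.2, 0.1], -1.8),
}

def recommend_window(x):
    """Score a household in every window; recommend the highest-probability one."""
    probs = {l: predict_contact(x, coef, b0) for l, (coef, b0) in windows.items()}
    return max(probs, key=probs.get), probs

best, probs = recommend_window([1.0, 0.5])
print(best)  # 2: window 2 has the highest predicted contact probability
```

Refitting daily, as the post describes, just means re-estimating the coefficients (and random effects) each night as new call records arrive, then re-scoring the active sample.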

First Post

I'm setting up this blog in order to post about my ongoing research, as well as on ideas for future research. I'm hoping to blog weekly to start (Thursdays being a good day to post a blog). I'll start tomorrow.