Friday, January 31, 2014

Tracking Experiment

I've been blogging about the dearth of experiments on methods for tracking. Experimenting can seem unnecessary when there are big differences in costs and effectiveness among steps. But when at least some steps are close in cost, it's more difficult to assume that one order is better than another. I liked the paper by Koo and colleagues since it actually experimented with which service to use for searching and found a specific order that worked better.

I'm now working on a project that uses tracking. We decided to use different orderings of the steps with different groups. It's not a perfect experiment, but the groups are relatively homogeneous, so inferring from the results to a broader population won't be a huge leap. We'll have some results.... in a few months.

Friday, January 24, 2014

Tracking, Again

So I finished reading a large number of studies on tracking. One thing I noticed is a general assumption that you should start with cheaper methods and move to more expensive ones. But that might not always be true. For instance, what if a cheap method almost never returns a result, while a more expensive one produces more leads? I could imagine skipping the cheap step, or putting it after the expensive one.

In any event, it is really a sequence of steps that needs to be optimized. Doing so involves both the costs and the expected returns of each step. But since each of those is known only conditional on whatever was done prior to the current step, we need experiments that vary the order of the steps to find out what the optimal sequence is going to be.
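To make that concrete, here's a minimal sketch of how expected cost per case depends on the order of the steps. The step names, unit costs, and success probabilities are entirely made up, and I'm assuming (unrealistically) that each step's chance of success is independent of what came before -- exactly the conditionality that real experiments would need to measure.

```python
from itertools import permutations

# Hypothetical tracking steps: (name, cost per attempt, prob. of locating).
# All numbers are invented; success is treated as independent of prior steps,
# which is the simplification an actual experiment would have to test.
steps = [("address update", 1.0, 0.05),
         ("web search",     2.0, 0.40),
         ("phone call",     8.0, 0.60)]

def expected_cost(order):
    """Expected cost per case when steps are tried in order until one succeeds."""
    cost, still_searching = 0.0, 1.0
    for _name, c, p in order:
        cost += still_searching * c          # pay for a step only if still searching
        still_searching *= (1.0 - p)
    return cost

for order in permutations(steps):
    print([name for name, _, _ in order], round(expected_cost(order), 2))

best = min(permutations(steps), key=expected_cost)
print("best:", [name for name, _, _ in best])
```

With these made-up numbers, cheapest-first is not the best order: the near-useless cheap step is worth deferring to the end, which is just the point above about when skipping or reordering a cheap step could pay off.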

Saturday, January 18, 2014

Tracking Research: A Lack of Experimental Studies

I've been reading a number of papers on tracking (aka tracing or locating) of panel members in longitudinal research. Many of the papers are case studies, reporting on what particular studies did. Very few actually conduct experiments.

Survey methodologists have produced a few recent experimental papers. Research on the HRS showed that higher incentives had persistent effects on response at later waves. McGonagle and colleagues looked at the effects of between-wave contact methods and incentives. Fumagalli and colleagues also explored between-wave contact methods.

These experiments all involve contacting panel members. I found one interesting paper that actually experimented with the order of the steps in the tracking process. Usually, the order starts with the cheapest steps and moves to the more expensive ones. If steps have similar costs, an order is typically just chosen. This paper by Koo et al. actually randomized the order of the steps (two different websites). I haven't seen any other papers like it. It's something that I think would be fun and useful to experiment with.

Friday, January 10, 2014

Tracking Costs

As I mentioned in my last post, I have been reading an enormous number of papers on locating respondents in panel studies. One interesting thing I have found is that tracking costs are often described in a manner different from what I would have expected. I'm used to thinking of the costs of activities -- telephone calls, internet searches, face-to-face calls, etc. These activity costs can be summed into total costs, and then averaged over the number of cases located or the number of cases interviewed.
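As a toy illustration of that kind of activity-based accounting, here's how the sums and averages might be computed. The unit costs, counts, and case totals are invented for the example.

```python
# Hypothetical per-attempt costs and attempt counts for a tracking effort.
unit_costs = {"phone call": 3.50, "internet search": 0.75, "field visit": 45.00}
activity_counts = {"phone call": 420, "internet search": 1100, "field visit": 60}

# Sum activity costs into a total, then average over located/interviewed cases.
total_cost = sum(unit_costs[a] * n for a, n in activity_counts.items())
located, interviewed = 310, 255

print(f"total cost:        ${total_cost:.2f}")
print(f"per located case:  ${total_cost / located:.2f}")
print(f"per interview:     ${total_cost / interviewed:.2f}")
```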

A lot of the papers, though, reported costs as FTEs (full-time equivalents). This seemed a lot simpler. I found one review paper that summarized several other studies and reported all the results as FTEs. This was nifty in that it was simple, and somewhat impervious to inflation and differences in pay rates -- so better than reporting dollar costs.

The downside is that FTE-based costs can't be rescaled when panels differ in how difficult their members are to track. Some panels are more difficult and require more effort (calls, searches, etc.), and it's hard to revise an FTE-based estimate to account for these differences. An estimate based on effort, however, can be rescaled for expected changes in effort (not that it is easy to generate the right expectations about effort).
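Here's a hypothetical illustration of the rescaling point. The effort levels and unit costs are made up; the idea is just that an effort-based estimate has knobs to turn when a new panel is expected to be harder to track, while a bare FTE total does not.

```python
# Invented baseline effort (mean attempts per case) and per-attempt costs.
baseline_effort = {"calls": 4.0, "searches": 6.0}
unit_cost = {"calls": 3.50, "searches": 0.75}

def cost_per_case(effort):
    """Effort-based estimate: attempts per case times cost per attempt."""
    return sum(unit_cost[a] * n for a, n in effort.items())

# Suppose a harder-to-track panel is expected to need 50% more calls.
harder = {"calls": baseline_effort["calls"] * 1.5,
          "searches": baseline_effort["searches"]}

print(cost_per_case(baseline_effort))   # baseline estimate
print(cost_per_case(harder))            # rescaled for the harder panel
# A summary like "tracking took 2.0 FTEs" offers no comparable adjustment.
```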