Friday, May 25, 2012

The Relative Value of Paradata and Sampling Frame Data

In one of my favorite non-survey articles, Rossi and colleagues looked at the relative value of purchase history data and demographic information in predicting the impact of coupons with different values. The purchase history data was more valuable in the prediction.

I believe a similar situation applies to surveys, at least in some settings. That is, paradata might be more valuable than sampling frame data. Of course, many of the surveys that I work on have very weak data on the sampling frame.

In any event, I fit random-intercept logistic regression models predicting contact that include some sampling frame data from an RDD survey. The sampling frame data are generally neighborhood characteristics. I recently made a chart showing predicted versus observed contact rates for households in a particular time slot (call window). The dark circles are the predicted-by-observed values (household contact rates) from the multilevel model. I also fit a marginal logistic regression model; the light gray squares are the predicted-by-observed values from that model.

It's pretty clear that the sampling frame information does not help differentiate cases in terms of contactability: the light gray squares are barely differentiated. Cases with high observed contact rates receive about the same predicted contact rate as cases with low observed contact rates.

But the random-intercept model fits much better. That is, cases with high observed values (contact rates) also have high predicted values. This is despite the fact that many households have fewer than five calls on which to base these predictions; the model does even better when we have many calls for each household.
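The contrast can be sketched in a small simulation. This is not the survey data or the models described above: it uses a single made-up "frame" covariate, a crude IRLS fit for the marginal model, and a beta-binomial empirical-Bayes shrinkage estimate as a stand-in for the random-intercept predictions. All names and parameter values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated households: contact propensities vary widely, but the
# frame covariate is only weakly related to them.
n_hh = 500
true_p = rng.beta(2, 3, n_hh)                 # household contact propensity
frame_x = true_p + rng.normal(0, 0.5, n_hh)   # noisy neighborhood covariate
calls = rng.integers(3, 15, n_hh)             # calls per household
contacts = rng.binomial(calls, true_p)        # observed contacts
obs_rate = contacts / calls

# "Marginal" model: binomial logistic regression of contact on the frame
# covariate, fit by simple IRLS (no household effect).
X = np.column_stack([np.ones(n_hh), frame_x])
beta = np.zeros(2)
for _ in range(50):
    eta = X @ beta
    p = 1 / (1 + np.exp(-eta))
    W = np.maximum(calls * p * (1 - p), 1e-8)
    z = eta + (contacts - calls * p) / W
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
marginal_pred = 1 / (1 + np.exp(-(X @ beta)))

# Stand-in for the random-intercept predictions: empirical-Bayes shrinkage
# of each household's rate toward the overall mean (beta-binomial posterior
# mean with an assumed prior sample size).
prior_strength = 4.0
pbar = contacts.sum() / calls.sum()
eb_pred = (contacts + prior_strength * pbar) / (calls + prior_strength)

# The marginal predictions vary little; the shrinkage predictions track
# the observed household rates much more closely.
print("spread of marginal predictions:", round(marginal_pred.std(), 3))
print("spread of EB predictions:      ", round(eb_pred.std(), 3))
print("corr(obs, marginal):", round(np.corrcoef(obs_rate, marginal_pred)[0, 1], 2))
print("corr(obs, EB):      ", round(np.corrcoef(obs_rate, eb_pred)[0, 1], 2))
```

The point of the sketch is the same as the graph's: when the frame covariate carries little information, the marginal predictions collapse toward a common value, while a model with household-level effects spreads out with the observed rates.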

To me, in this case, the call records are much more valuable than the sampling frame data.

Friday, May 4, 2012

Imputation and the Impact of Nonresponse


I've been thinking lately about how to evaluate the risk of nonresponse bias. Imputation seems to be a natural way to evaluate those risks. In my setup, I impute values for the unit nonresponders. Then I can use imputation to evaluate the value of the data that I observed (a retrospective view) and to predict the value of different parts of the data that I have not yet observed (a prospective view).

Allow me to use a little notation. Y_a is a matrix of observed data collected under protocol a. Y_b is a matrix of observed data collected under protocol b. Y_m is the matrix of data for the nonresponders. It's missing. I could break Y_m into two pieces: Y_m1 and Y_m2.


1) Retrospective. I can delete data that I observed and impute the values plus all the other missing values (i.e. the unit nonresponse). I can impute Y_b and Y_m conditional on Y_a. I can also impute Y_m conditional on Y_b and Y_a. It might be interesting to compare the estimates from these two procedures to see if protocol b has added much.
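The retrospective comparison can be sketched with simulated data (not the survey setting above). Here the target is a mean, x plays the role of frame data known for everyone, and y is the survey outcome; the split into protocol-a respondents, protocol-b respondents, and nonresponders, the sample sizes, and the normal imputation model are all assumptions for illustration, and the parameter draw treats sigma as fixed rather than drawing it from its posterior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: x is known for everyone (frame data); y is the
# survey outcome. Protocol a obtains the first 100 responses, protocol b
# adds 60 more, and 40 cases remain unit nonresponders (Y_m).
n = 200
x = rng.normal(0, 1, n)
y = 1.0 + 0.8 * x + rng.normal(0, 1, n)
idx_a = np.arange(100)
idx_b = np.arange(100, 160)

def mi_mean(obs_idx, m=50):
    """Multiply impute y for all cases outside obs_idx from a normal
    regression of y on x fit to obs_idx; return the MI estimate of mean(y)."""
    mis_idx = np.setdiff1d(np.arange(n), obs_idx)
    X = np.column_stack([np.ones(obs_idx.size), x[obs_idx]])
    beta_hat, res, *_ = np.linalg.lstsq(X, y[obs_idx], rcond=None)
    sigma = np.sqrt(res[0] / (obs_idx.size - 2))
    cov = sigma**2 * np.linalg.inv(X.T @ X)
    ests = []
    for _ in range(m):
        # Draw regression parameters (approximate posterior, sigma fixed
        # for simplicity), then draw the missing y values.
        beta_draw = rng.multivariate_normal(beta_hat, cov)
        y_imp = y.copy()
        y_imp[mis_idx] = (beta_draw[0] + beta_draw[1] * x[mis_idx]
                          + rng.normal(0, sigma, mis_idx.size))
        ests.append(y_imp.mean())
    return float(np.mean(ests))

# Impute Y_b and Y_m conditional on Y_a alone ...
est_a_only = mi_mean(idx_a)
# ... versus imputing Y_m conditional on both Y_a and Y_b.
est_ab = mi_mean(np.concatenate([idx_a, idx_b]))

print("estimate imputing from Y_a only:   ", round(est_a_only, 3))
print("estimate imputing from Y_a and Y_b:", round(est_ab, 3))
```

In this toy example the missingness is completely at random, so the two estimates should roughly agree; the comparison becomes informative when protocol b brings in cases that differ from the protocol-a respondents.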

2) Prospective. In this case I can use a nested imputation procedure to predict the impact of each piece on the fraction of missing information (see Harel and Stratton, 2009). If I impute Y_m1 conditional on Y_a and Y_b, and then Y_m2 conditional on Y_a, Y_b, and Y_m1, I can then break the estimated FMI into components due to Y_m1 and Y_m2. In this way I can predict which cases are more valuable in the sense of contributing more to the information.
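A sketch of the nested procedure, again on simulated data with a simple normal model rather than a real survey. The combining rules follow the standard nested-imputation formulas (between-nest and within-nest variance components), and attributing one component of the FMI to Y_m1 and the other to Y_m2 is one common way to make the split; the data, sample sizes, and the fixed-sigma imputation draws are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: estimate the mean of y. The first 120 cases are observed
# (standing in for Y_a and Y_b); Y_m1 and Y_m2 are the two blocks of
# unit nonresponse to be imputed in nested fashion.
n = 200
y = rng.normal(0, 1, n)
obs = np.arange(120)
n_m1, n_m2 = 40, 40

M, N = 20, 5          # M outer imputations of Y_m1, N inner of Y_m2
est = np.empty((M, N))
U = np.empty((M, N))  # complete-data variance of the mean

s_obs = y[obs].std(ddof=1)
for i in range(M):
    # Outer stage: impute Y_m1 conditional on the observed data
    # (draw the mean, treat sigma as fixed for simplicity).
    mu1 = y[obs].mean() + rng.normal(0, s_obs / np.sqrt(obs.size))
    y1 = rng.normal(mu1, s_obs, n_m1)
    for j in range(N):
        # Inner stage: impute Y_m2 conditional on observed + imputed Y_m1.
        pool = np.concatenate([y[obs], y1])
        s_pool = pool.std(ddof=1)
        mu2 = pool.mean() + rng.normal(0, s_pool / np.sqrt(pool.size))
        y2 = rng.normal(mu2, s_pool, n_m2)
        full = np.concatenate([y[obs], y1, y2])
        est[i, j] = full.mean()
        U[i, j] = full.var(ddof=1) / n

# Nested-imputation combining rules:
theta_m = est.mean(axis=1)                 # per-nest means
W_bar = U.mean()                           # average within-imputation variance
B = theta_m.var(ddof=1)                    # between outer nests (Y_m1)
V = ((est - theta_m[:, None])**2).sum() / (M * (N - 1))  # within nests (Y_m2)
T = W_bar + (1 + 1/M) * B + (1 - 1/N) * V  # total variance

fmi_m1 = (1 + 1/M) * B / T    # share of variance attributed to Y_m1
fmi_m2 = (1 - 1/N) * V / T    # additional share attributed to Y_m2
print("FMI component due to Y_m1:", round(fmi_m1, 3))
print("FMI component due to Y_m2:", round(fmi_m2, 3))
```

Cases whose block contributes the larger FMI component are the more valuable ones to pursue, in the sense of contributing more of the missing information.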
