I've been working on this paper for a while. It compares models estimated in the middle of data collection with those estimated at the end of data collection. It points out that these daily models may be vulnerable to biased estimates, akin to the "early vs. late" dichotomy that is sometimes used to evaluate the risk of nonresponse bias. The solution is finding the right prior specification in a Bayesian setup, or using the right kind and amount of data from a prior survey, so that the estimates are informed by enough "late" responders.
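To make the Bayesian option concrete, here is a minimal sketch (not the paper's actual model; the covariates, outcomes, prior means, and prior_sd are all made up for illustration) of a penalized logistic response-propensity model whose prior is centered on coefficients from a prior survey's end-of-field fit, so the "late" responders from that earlier survey pull the mid-field estimates back toward something less optimistic:

```python
# Hypothetical sketch: daily propensity model with a Gaussian prior centered on
# coefficients from a prior survey's end-of-field model (which reflects late responders).
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(beta, X, y, prior_mean, prior_sd):
    """Penalized logistic log-likelihood: the prior shrinks today's fit
    toward the prior survey's end-of-field coefficients."""
    eta = X @ beta
    log_lik = np.sum(y * eta - np.log1p(np.exp(eta)))
    log_prior = -0.5 * np.sum(((beta - prior_mean) / prior_sd) ** 2)
    return -(log_lik + log_prior)

def fit_daily_model(X, y, prior_mean, prior_sd):
    """MAP estimate of the daily model given the data available today."""
    res = minimize(neg_log_posterior, x0=prior_mean,
                   args=(X, y, prior_mean, prior_sd), method="BFGS")
    return res.x

# Placeholder data standing in for the cases active so far this field period.
rng = np.random.default_rng(0)
X_today = rng.normal(size=(500, 3))          # made-up covariates
y_today = rng.binomial(1, 0.2, size=500)     # made-up interview outcomes
prior_mean = np.array([-1.5, 0.3, -0.2])     # hypothetical prior-survey coefficients
beta_daily = fit_daily_model(X_today, y_today, prior_mean, prior_sd=0.5)
```

Tightening prior_sd leans harder on the prior survey; loosening it lets the current (early-responder-heavy) data dominate, which is exactly the trade-off the paper is about.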
But, I did manage to manufacture this figure, which shows the estimates from the model fit each day with the data available that day ("Daily") and the model fit at the end of data collection ("Final"). The daily model is overly optimistic early. For this survey, there were 1,477 interviews. The daily model predicted there would be 1,683 interviews. The final model predicted 1,477.
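For readers who want the arithmetic behind those totals, here is a tiny illustrative sketch (the numbers are placeholders, not the survey's data): the projected interview count is just the interviews already completed plus the sum of the model's predicted response probabilities for the still-active cases, so a model that over-predicts propensities over-predicts the total in the same way.

```python
# Hypothetical sketch: projected interviews = completes so far + expected yield of open cases.
import numpy as np

def projected_interviews(completed_so_far, p_open_cases):
    """Completed interviews plus the sum of predicted propensities for open cases."""
    return completed_so_far + np.sum(p_open_cases)

p_daily = np.array([0.35, 0.10, 0.60])   # made-up daily-model propensities
p_final = np.array([0.25, 0.05, 0.50])   # made-up final-model propensities
print(projected_interviews(900, p_daily), projected_interviews(900, p_final))
```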
That's the average "optimism." That "optimism" might be bad if I think that things are going better than they actually are. On the other hand, if all I cared about was the ranking of cases into "high," "medium," and "low" probability, then this optimistic bias might not matter... but then again, it might.
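One way to check whether it matters (purely illustrative, with simulated propensities standing in for the two models' outputs) is to compare how the daily and final models order the same cases and how often they agree on the "high," "medium," and "low" terciles, rather than comparing the predicted levels:

```python
# Hypothetical check: does the daily model's optimism change the triage of cases?
import numpy as np
from scipy.stats import spearmanr

def tercile(p):
    """Label each case 0=low, 1=medium, 2=high using the model's own tercile cutpoints."""
    cuts = np.quantile(p, [1/3, 2/3])
    return np.digitize(p, cuts)

rng = np.random.default_rng(1)
p_final = rng.beta(2, 8, size=2000)                                    # made-up final-model propensities
p_daily = np.clip(p_final + 0.04 + rng.normal(0, 0.03, 2000), 0, 1)    # optimistic, noisier daily version

rho, _ = spearmanr(p_daily, p_final)
agreement = np.mean(tercile(p_daily) == tercile(p_final))
print(f"rank correlation: {rho:.2f}, tercile agreement: {agreement:.2%}")
```

If the bias is mostly a uniform upward shift, the ranking and the tercile assignments survive it; if the bias varies across kinds of cases, the triage itself gets distorted, which is the "but then again, it might" part.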