The normal strategy for a publicly released dataset is for the data collector to impute item-missing values and create a single weight that accounts for the probability of selection, nonresponse, and noncoverage. This weight is constructed under a model that needs to be appropriate for every statistic that could be published from the data. The model therefore needs to be robust, which may make it less efficient for some analyses.
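As a rough sketch of what that single weight looks like, the standard recipe multiplies a base weight (the inverse probability of selection) by a nonresponse adjustment computed within weighting classes. Everything below — the class labels, selection probabilities, and case data — is invented for illustration, not drawn from any actual survey:

```python
def base_weight(p_selection):
    """Base weight: inverse of the probability of selection."""
    return 1.0 / p_selection

def nonresponse_factor(cases, cls):
    """Within a weighting class, inflate respondent weights by the
    inverse of the weighted response rate in that class."""
    in_class = [c for c in cases if c["cls"] == cls]
    total = sum(base_weight(c["p_sel"]) for c in in_class)
    resp = sum(base_weight(c["p_sel"]) for c in in_class if c["responded"])
    return total / resp

# Toy sample: two hypothetical weighting classes, one nonrespondent.
cases = [
    {"p_sel": 0.01, "cls": "urban", "responded": True},
    {"p_sel": 0.01, "cls": "urban", "responded": False},
    {"p_sel": 0.02, "cls": "rural", "responded": True},
    {"p_sel": 0.02, "cls": "rural", "responded": True},
]

# Final weight for each respondent: base weight x nonresponse adjustment.
final_weights = {
    i: base_weight(c["p_sel"]) * nonresponse_factor(cases, c["cls"])
    for i, c in enumerate(cases)
    if c["responded"]
}
```

Note that the nonresponse factor here depends on the weighting-class model the data producer chose; a data user with access to the nonrespondent records could form different classes (or use a different adjustment entirely) better suited to their particular analysis.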
More efficient analyses are possible. But to carry them out, data users need more data. They need data for nonresponders. In some cases, they may need paradata on both responders and nonresponders. At the moment, one of the few surveys that I know of that is releasing these data is the NHIS. The European Social Survey is another. Are there others?
Of course, not everyone is going to be able to use these data. And, in many cases, it won't be worth the extra effort. But it does seem like there is a mismatch between the theory and practice in this case.
Not only would the release of these data allow users to construct their own adjustments (weighting or imputation), it would also allow them to compare their strategy to that of the data producer. This sounds like healthy competition.