
Showing posts from January, 2015

Margin of Error

There was a debate held yesterday on the "Margin of Error" in the presence of nonresponse and when using non-probability samples. This is an interesting and useful discussion. In the best of circumstances, the "margin of error" represents the sampling error associated with an estimate. Unfortunately, other matters often... errrrr... always interfere. In the case of nonprobability samples, the sampling mechanism is not easily identified or modeled. In the case of probability samples, the nonresponse mechanism has to be modeled. Either situation involves untestable model assumptions that are required to motivate the estimation of a margin of error. One step forward would be for people who report estimated "margins of error" to reveal all of the assumptions in their models (weighting models for nonresponse or, in the case of nonprobability samples, selection) and to describe the sampling and recruitment mechanisms in enough detail that others can evaluate them.
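To make concrete how the weighting model already shapes the reported number, here is a minimal sketch (standard formulas, not anything from the debate itself) of a margin of error for a proportion once nonresponse weights enter the picture. It uses Kish's effective sample size to reflect the precision lost to unequal weights; the weights below are hypothetical.

```python
import math

def effective_sample_size(weights):
    """Kish's effective sample size: (sum w)^2 / sum w^2.
    Approximates the loss of precision from unequal weights."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

def margin_of_error(p, weights, z=1.96):
    """95% margin of error for a weighted proportion p,
    using the effective sample size in place of n."""
    n_eff = effective_sample_size(weights)
    return z * math.sqrt(p * (1 - p) / n_eff)

# Hypothetical example: 1,000 respondents with nonresponse weights
# between 1 and 3. The unequal weights shrink the effective n and
# widen the margin of error relative to the unweighted case.
weights = [1 + 2 * (i % 5) / 4 for i in range(1000)]
print(round(margin_of_error(0.50, weights), 4))  # ~0.033 vs. ~0.031 unweighted
```

Even this simple calculation inherits whatever assumptions produced the weights; a different weighting model would yield a different "margin of error" for the same data.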

Adaptive Design and Panel Surveys

I read this very interesting blog post by Peter Lugtig yesterday. The slides from the talk he describes are also linked from the post. He builds on an analysis that identifies distinct classes of nonresponders and then describes the characteristics of the persons in each class. For example, some drop out early, some "lurk" around the survey, and some stay more or less permanently. He suggests that it might be smart to identify design features that are effective for each group and then tailor those features to the subgroups in an adaptive design. This makes a lot of sense, and panel studies are an attractive place to start doing this kind of work. In the panel setting, there is much more data available on cases, which can help in identifying subgroups. And, with repeated trials of the protocol, it may be possible to improve outcomes (response) over time. I think the hard part is creating the groups. This reminds me of a problem that I read...
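On the "creating the groups" step: the analysis the post describes classifies panel members by their patterns of response across waves. Here is a small sketch of that idea on simulated response histories, using k-means as a quick stand-in for the latent class models such analyses typically use. The three simulated groups (stayers, lurkers, early dropouts) echo the patterns mentioned above; all names and numbers are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical panel: rows are respondents, columns are waves,
# 1 = responded to that wave, 0 = did not.
rng = np.random.default_rng(0)
stayers  = rng.binomial(1, 0.95, size=(300, 8))  # almost always respond
lurkers  = rng.binomial(1, 0.50, size=(300, 8))  # respond intermittently
dropouts = np.hstack([np.ones((300, 3), dtype=int),
                      np.zeros((300, 5), dtype=int)])  # leave after wave 3
patterns = np.vstack([stayers, lurkers, dropouts])

# Cluster the response histories into candidate subgroups, then
# inspect each cluster's per-wave response rates.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(patterns)
for k in range(3):
    print(k, patterns[km.labels_ == k].mean(axis=0).round(2))
```

In a real panel, the clustering would also draw on the rich frame and paradata available on cases, and the resulting groups would be the targets for tailored protocols.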

Mixed-Mode Surveys: Nonresponse and Measurement Errors

I've been away from the blog for a while, but I'm back. One of the things I did during my hiatus was to read papers on mixed-mode surveys. In most of these surveys, there are nonresponse biases and measurement biases that vary across the modes, and these errors are almost always confounded. An important exception is Olson's paper. In that paper, she had gold-standard data that allowed her to examine both error sources. Absent such gold-standard data, there are limits on what can be done. I read a number of interesting papers, but my main conclusion was that we need to make some assumptions in order to motivate any analysis. For example, one approach is to build nonresponse adjustments for each of the modes and then argue that any differences remaining between the modes are measurement biases. Without such an assumption, not much can be said about either error source. Experimental designs certainly strengthen these assumptions, but they do not completely unconfound the error sources.
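Here is a minimal simulation sketch (mine, not from any of the papers) of that adjust-then-attribute approach: weight each mode's respondents for nonresponse using an auxiliary variable, and read the remaining gap between modes as measurement bias. The logistic propensities, the auxiliary variable x, and the +0.3 "phone effect" are all hypothetical, and the attribution only holds under the assumption that x fully explains nonresponse in each mode.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def adjusted_mean(x, responded, y):
    """Weight respondents by inverse estimated response propensity
    (propensity modeled on auxiliary variable x), then take the
    weighted mean of the outcome among respondents."""
    model = LogisticRegression().fit(x.reshape(-1, 1), responded)
    p = model.predict_proba(x.reshape(-1, 1))[:, 1]
    w = (1 / p)[responded == 1]
    return np.average(y[responded == 1], weights=w)

# Hypothetical data: one population measured under two modes.
# Nonresponse depends on x (in opposite directions by mode); the
# phone mode also shifts the reported outcome by +0.3.
n = 5000
x = rng.normal(size=n)
truth = 2 + 0.5 * x + rng.normal(scale=0.5, size=n)
resp_web   = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 0.8 * x))))
resp_phone = rng.binomial(1, 1 / (1 + np.exp(-(0.2 - 0.8 * x))))
y_web, y_phone = truth, truth + 0.3

web   = adjusted_mean(x, resp_web, y_web)
phone = adjusted_mean(x, resp_phone, y_phone)
# After adjusting both modes for nonresponse, the remaining gap
# is attributed to measurement bias (~0.3 by construction here).
print(round(phone - web, 2))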
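```

If the propensity model is wrong in either mode, the residual nonresponse bias gets misread as measurement bias, which is exactly why the assumption needs to be stated rather than left implicit.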