## Posts

Showing posts from August, 2014

### Probability Sampling

In light of the recent kerfuffle over probability versus non-probability sampling, I've been thinking about some of the issues involved with this distinction. Here are some thoughts that I use to order the discussion in my own head:

1. The research method has to be matched to the research question. This includes cost versus quality considerations. Focus groups, for instance, are useful even though participants are not typically recruited with probability methods. Non-probability sampling can provide useful data. Sometimes non-probability samples are called for.

2. A role for methodologists in the process is to test and improve faulty methods. Methodologists have been looking at errors due to nonresponse for a while. We have a lot of research on using models to reduce nonresponse bias. As research moves into new arenas, methodologists have a role to play there. While we may (er... sort of) understand how to adjust for nonresponse, do we know how to adjust for an unknown probability of getting into an onlin…
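To make the model-based adjustment idea concrete, here is a minimal sketch of one common approach, a weighting-class nonresponse adjustment: respondents in each class (defined by a frame variable such as Census Region) get a weight equal to the inverse of that class's response rate. The data structure and field names here are hypothetical, and this is a toy illustration, not anyone's production adjustment procedure.

```python
from collections import defaultdict

def nonresponse_weights(sample):
    """Weighting-class nonresponse adjustment: within each class
    (here, a hypothetical 'region' frame variable), respondents
    receive weight 1 / (class response rate)."""
    totals = defaultdict(int)     # sampled units per class
    responded = defaultdict(int)  # respondents per class
    for unit in sample:
        totals[unit["region"]] += 1
        responded[unit["region"]] += unit["responded"]
    rates = {g: responded[g] / totals[g] for g in totals}
    # Only respondents carry adjusted weights.
    return {i: 1.0 / rates[u["region"]]
            for i, u in enumerate(sample) if u["responded"]}

# Toy sample: NE responds at 0.5, S at 1.0.
sample = [
    {"region": "NE", "responded": 1},
    {"region": "NE", "responded": 0},
    {"region": "S",  "responded": 1},
    {"region": "S",  "responded": 1},
]
weights = nonresponse_weights(sample)
# NE respondents get weight 2.0; S respondents get weight 1.0.
```

The open question in the post is exactly what replaces `rates` when the inclusion mechanism, such as joining an online panel, has an unknown probability.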

### The Dual Criteria for a Useful Survey Design Feature

I've been working on a review of patterns of nonresponse to a large survey on which I worked. In my original plan, I looked at things that are related to response, and then I looked at things that are related to key statistics produced by the survey. "Things" include design features (e.g. number of calls, refusal conversions, etc.) and paradata or sampling frame data (e.g. Census Region, interviewer observations about the sampled unit, etc.).

We found that there were some things that heavily influenced response (e.g. calls) but did not influence the key statistics. That's good news: having more or less of such a feature matters for sampling error, but it doesn't seem to matter with respect to nonresponse bias.

There were also some that influenced the key statistics but not response. For example, the interviewer observations we have for the study: response rates are similar across subgroups defined by these observations. As a result, I won't have to rely on large weights to get to unbiase…
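The dual criteria described above can be sketched as a pair of comparisons for a given design feature: (1) does the feature shift response rates, and (2) does it shift a key statistic among respondents? The function and variable names below are hypothetical, and the toy numbers are for illustration only.

```python
import statistics

def dual_criteria(feature, responded, outcome):
    """For a binary design feature (e.g. many vs. few calls),
    report (1) the difference in response rates between feature
    groups and (2) the difference in a key statistic computed
    among respondents only."""
    def response_rate(flag):
        grp = [r for f, r in zip(feature, responded) if f == flag]
        return sum(grp) / len(grp)

    def key_estimate(flag):
        grp = [y for f, r, y in zip(feature, responded, outcome)
               if f == flag and r]
        return statistics.mean(grp)

    return {
        "response_rate_diff": response_rate(1) - response_rate(0),
        "key_statistic_diff": key_estimate(1) - key_estimate(0),
    }

# Toy data: feature group 1 responds at 0.75, group 0 at 0.5;
# outcome values are only meaningful for respondents.
feature   = [1, 1, 1, 1, 0, 0, 0, 0]
responded = [1, 1, 1, 0, 1, 1, 0, 0]
outcome   = [10, 12, 14, 0, 20, 22, 0, 0]
result = dual_criteria(feature, responded, outcome)
```

A feature that moves the first number but not the second is the "good news" case from the post; one that moves the second but not the first is the kind that matters for bias even though response looks balanced.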

### "Go Big, or Go Home."

I just got back from JSM, where I participated in a session on adaptive design. Mick Couper served as a discussant for the session. The title of this blog post is one of the points from his talk. He said that innovative, adaptive methods need to show substantial results; otherwise, they won't be convincing. As he pointed out, part of the problem is that we are often tinkering with marginal changes on existing surveys. These kinds of changes need to be low risk, that is, they can't cause damage to the results and should only help. However, these kinds of changes are often limited in what they can do. His point was that making the kind of big changes that show big effects may require taking some risk.

This made sense to me. It would be nice to have some methodological studies that aren't constrained by the needs of an existing survey. I suppose this could be a separate, large sample with the same content as an existing survey. However, I wonder if this is a chicken-and-egg type of problem. …