Maybe I'm just being cranky, but I'm starting to think we need to be more careful about when we use the term "nonresponse bias." It's a simple term, right? What could be wrong here?

The situation I'm thinking about is when we compare responders and nonresponders on characteristics that are known for everyone. This is a common technique, and a good idea; everyone should do it to evaluate the quality of their data. My issue is when we start to describe the differences between responders and nonresponders on these characteristics as "nonresponse bias." These differences are really proxies for nonresponse bias: we know the value for every case, so there isn't any nonresponse bias in these characteristics themselves.

The danger, as I see it, is that naive readers could miss that distinction. And I think it is an important distinction. If I say "I have found a method that reduces nonresponse bias," what will some folks hear? I think such a statement is pro...
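To make that distinction concrete, here is a minimal sketch of the kind of responder/nonresponder comparison I'm describing. It's just an illustration in Python with pandas, using made-up numbers, a hypothetical frame variable `age`, and a hypothetical `responded` indicator, not any particular production workflow.

```python
# Minimal sketch: compare respondents to the full sample on a frame variable.
# Assumes a hypothetical DataFrame `frame` with one row per sampled case,
# a variable `age` known for every case, and a 0/1 `responded` flag.
import pandas as pd

frame = pd.DataFrame({
    "age":       [34, 58, 45, 29, 62, 41, 53, 37],
    "responded": [1,  1,  0,  0,  1,  0,  1,  1],
})

full_sample_mean = frame["age"].mean()
respondent_mean = frame.loc[frame["responded"] == 1, "age"].mean()

# This difference is a *proxy* for nonresponse bias, not the bias itself:
# age is known for every case, so the full-sample mean has no nonresponse
# bias. The bias we actually care about is in the survey variables, which
# we only observe for respondents.
proxy_difference = respondent_mean - full_sample_mean
print(f"Respondent mean age:  {respondent_mean:.1f}")
print(f"Full-sample mean age: {full_sample_mean:.1f}")
print(f"Proxy difference:     {proxy_difference:+.1f}")
```

The point of the sketch is that the printed difference tells us respondents differ from the full sample on something we already know for everyone; it only hints at what might be happening in the variables we don't know for the nonresponders.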