This post is a follow-up to my last. Since then, I came across an interesting article at Survey Practice. I'm really pleased to see this article, since this is a discussion we really need to have. The article, by Koen Beullens and Geert Loosveldt, presents the results of a simulation study on the impact of using different indicators to govern data collection. In other words, they simulate the consequences of maximizing different indicators during data collection. The three indicators are the response rate, the R-Indicator (Schouten et al., 2009), and the maximal bias (also developed by Schouten et al., 2009). The simulation shows a situation in which maximizing either of the latter two indicators produces a different result than maximizing the response rate. Maximizing the R-Indicator, for example, led to a slightly lower response rate than the data collection strategy that maximized the response rate.
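For readers less familiar with the latter two indicators: the R-Indicator is defined as R(ρ) = 1 − 2S(ρ), where S(ρ) is the standard deviation of response propensities estimated from auxiliary data, and the maximal (standardized) nonresponse bias is bounded by S(ρ)/ρ̄, with ρ̄ roughly the response rate. Here is a minimal sketch of how one might compute both from a fitted propensity model. This is my own illustration, not the authors' code; the logistic regression and the function name are assumptions on my part.

```python
# Sketch: computing the R-Indicator and maximal bias bound (Schouten et al., 2009)
# from estimated response propensities. Illustrative only, not the authors' code.
import numpy as np
from sklearn.linear_model import LogisticRegression

def representativeness_indicators(X_frame, responded):
    """X_frame: auxiliary variables known for every sampled unit (n x p array).
    responded: 0/1 response indicator for each sampled unit."""
    # Estimate response propensities from the auxiliary data. The model choice
    # is an assumption here; logistic regression is just a common default.
    model = LogisticRegression(max_iter=1000).fit(X_frame, responded)
    rho = model.predict_proba(X_frame)[:, 1]

    s_rho = rho.std(ddof=1)   # variability of the estimated propensities
    rho_bar = rho.mean()      # roughly the (expected) response rate

    r_indicator = 1 - 2 * s_rho   # R = 1 - 2*S(rho); 1 means fully representative response
    max_bias = s_rho / rho_bar    # upper bound on the standardized nonresponse bias
    return r_indicator, max_bias
```

Note that everything hinges on the propensity model: the indicators are only as good as the auxiliary variables and the specification used to estimate ρ, which is exactly the weakness I come back to below.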
This is an interesting simulation. It pretty clearly explores the distortions that can occur when maximizing the response rate is the goal.
However, I don't see it as convincing evidence that we should radically change our data collection procedures. As I mentioned in my last post, I wouldn't want anyone to conclude that lowering response rates is always OK. The problem is certainly more complicated than that. I would contend that we need experimental evidence regarding the impact of using other indicators to guide data collection.
In the first instance, data collection is such a complex activity that it is impossible to describe all the 'essential features' of any design. It's even more difficult to understand the impact of all these choices. Before changing those practices, we should understand the consequences of doing so. We wouldn't want to throw out the baby with the bathwater. In my mind, that requires experimental evidence.
It is also the case that each of the indicators proposed has weaknesses. If the model underlying the R-Indicator is misspecified, this could lead to inefficient or even bias-increasing actions. It would be good to understand when and how this might happen -- and what protections against this we might develop. My view is that this will require a constellation of indicators that together tell a coherent story.
The good news is that this would require the work of many survey methodologists.