
Showing posts from September, 2017

Responsive Design and Sampling Variability II

Just continuing the thought from the previous post...

Some examples of controlling the variability don't make much sense. For instance, except in the largest of samples, there is no real difference between a response rate of 69% and one of 70%. Yet there is often a "face validity" claim that the difference is big, in that 70% is an important line to cross.

However, for survey costs, the difference can be substantial if the budgeted amount is $1,000,000 and the actual cost is $1,015,000. Although this is roughly the same proportionate difference as between the response rates, going over a budget can have many negative consequences. In this case, controlling the variability can be critical. Although the costs might be "noise" in some sense, they are real.
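The claim that the two gaps are roughly the same in proportionate terms is easy to check directly (the figures below are just the ones from the examples above):

```python
# Relative (proportionate) difference between the two response rates.
rate_rel = (0.70 - 0.69) / 0.69                  # about 1.4%

# Relative (proportionate) difference between budget and actual cost.
cost_rel = (1_015_000 - 1_000_000) / 1_000_000   # exactly 1.5%
```

Both come out near 1.5%, which is the sense in which the two examples are proportionately comparable even though their practical consequences differ.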

Responsive design and sampling variability

At the Joint Statistical Meetings, I went to a session on responsive and adaptive design. One of the speakers, Barry Schouten, contrasted responsive and adaptive designs. One of the contrasts was that responsive design was concerned with controlling short-term fluctuations in outcomes such as response rates.

This got me thinking. I think the idea is that responsive design will respond to the current data, which includes some sampling error. In fact, it's possible that sampling error could be the sole driver of responsive design interventions in some cases. I don't think this is usually the case, but it certainly is part of what responsive designs might do.

At first, this seemed like a bad feature. One could imagine that all responsive design interventions should include a feature that accounts for sampling error, for instance, decision rules that trigger an intervention only when a difference attains a level of statistical significance. We've implemented some like that.
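As a minimal sketch of what such a decision rule might look like (the function name and the two-group setup are illustrative assumptions, not the implementation mentioned above), a pooled two-proportion z-test could gate the intervention so that sampling noise alone rarely triggers it:

```python
import math

def significant_difference(resp_a, n_a, resp_b, n_b, z_crit=1.96):
    """Pooled two-proportion z-test: return True only if the gap between
    the two observed response rates is statistically significant, so an
    intervention is not triggered by sampling error alone."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    p_pool = (resp_a + resp_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return abs(z) > z_crit

# A 69% vs. 70% gap is noise at n=500 per group, but significant at n=100,000,
# echoing the point that only the largest samples distinguish these rates.
significant_difference(345, 500, 350, 500)          # False
significant_difference(69000, 100000, 70000, 100000)  # True
```

The design choice here is simply that the rule asks "is this gap larger than sampling variability would produce?" before acting, rather than reacting to every short-term fluctuation.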

On the other hand, sometimes controlling sampling …