Thursday, September 16, 2010

"Responsive Design" and "Adaptive Design"

My dissertation was entitled "Adaptive Survey Design to Reduce Nonresponse Bias." I had been working for several years on "responsive designs" before that. As I was preparing my dissertation, I really saw "adaptive" design as a subset of responsive design.

Since then, I've seen both terms used in different places. As both terms are relatively new, there is likely to be confusion about the meanings. I thought I might offer my understanding of the terms, for what it's worth.

The term "responsive design" was developed by Groves and Heeringa (2006). They coined the term, so I think their definition is the one that should be used. They defined "responsive design" in the following way:

1. Preidentify a set of design features that affect cost and error tradeoffs.
2. Identify indicators for these costs and errors. Monitor these during data collection.
3. Alter the design features according to pre-identified decision rules that use the indicators from step 2.
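To make that structure concrete, here is a minimal sketch in Python of how pre-identified indicators and decision rules might be wired together. Everything in it (the DecisionRule class, the monitor function, the example rule) is a hypothetical illustration of the three steps, not code from any actual survey system.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class DecisionRule:
    # Step 3 lives here: when `trigger` fires on the monitored indicators,
    # `alteration` changes a pre-identified design feature.
    trigger: Callable[[Dict[str, float]], bool]
    alteration: Callable[[], None]

def monitor(compute_indicators: Callable[[], Dict[str, float]],
            rules: List[DecisionRule]) -> None:
    # Step 2: compute the indicators from incoming data, then apply any
    # pre-identified rules whose triggers fire.
    indicators = compute_indicators()
    for rule in rules:
        if rule.trigger(indicators):
            rule.alteration()

# Example: a rule specified before data collection begins.
rules = [DecisionRule(trigger=lambda ind: ind["response_rate"] < 0.50,
                      alteration=lambda: print("switch to phase 2 protocol"))]
monitor(lambda: {"response_rate": 0.47}, rules)

The point is simply that the rules exist before data collection starts; the monitoring step just checks them against the incoming data.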

I think this definition of responsive design would not cover "ad hoc" changes to survey designs. For example, increasing incentives when response rate targets are not met would not be responsive design. There is no planning -- no indicators are defined and no decision rules are in place before the survey begins; the change comes only after the process has gone "off the rails."

A different type of survey design might assign different protocols to different cases. For instance, you might stratify your sample by propensity to respond and then give different treatments to high- and low-propensity cases. This does not meet the definition of responsive design, since the design features are not altered in response to incoming data from the field.

You might ask, "why is pre-planning so important?" To my mind, the planning indicates that the tradeoffs have been considered. Ad hoc decisions, by contrast, are generally made rapidly, under less than ideal conditions, with the goal of minimizing damage. It's the same reasoning that prefers "risk management" to "damage control."

In addition to pre-planning, responsive design also includes alteration of design features following decision rules based on incoming data from the field.

As an example of a responsive design, consider the problem of controlling variation in subgroup response rates. Our indicator, in this case, is the set of subgroup response rates. Our decision rule might be that if any subgroup response rate lags behind the others by more than five percentage points, then that group will be prioritized in our sample management system. In this example, there is an indicator and a decision rule that will trigger an alteration of the design. I would call this responsive design.
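A rough sketch of that rule, with made-up numbers, might look like the following. I read "lags behind" as trailing the best-performing subgroup, and the print statement stands in for whatever prioritization step the sample management system actually provides.

def lagging_subgroups(response_rates, threshold=0.05):
    # Indicator: the set of subgroup response rates. A subgroup "lags" when it
    # trails the best-performing subgroup by more than five percentage points.
    best = max(response_rates.values())
    return [g for g, rate in response_rates.items() if best - rate > threshold]

# Hypothetical rates monitored during data collection
rates = {"urban": 0.62, "suburban": 0.63, "rural": 0.55}
for group in lagging_subgroups(rates):
    # Design alteration: flag the lagging subgroup for prioritization.
    print("prioritize subgroup:", group)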

Adaptive design, on the other hand, is a term I view as overlapping with responsive design. I became interested in "adaptive treatment regimes" through Susan Murphy here at the University of Michigan. The problems addressed by these regimes seemed similar in structure to those faced by surveys (multi-stage decisions about which of several possible treatments to apply). The idea is that the treatment can be tailored to the characteristics of the patient and the history of previous treatments. Further, there may be interactions between treatments such that the sequence of treatments is important. She and her colleagues were working on developing statistical and experimental methods for addressing these types of problems. I tried to apply them to surveys.

In my mind, the adaptive treatment regime approach is a set of statistical methods that could be used to develop responsive designs. That is, this array of methods could be used to develop indicators and decision rules that would be used to implement responsive designs.

For example, one big question in mixed-mode surveys is what sequence of modes to use and when to make the switch. Smyth et al. (2010) explore the impact of the sequence and "dosage" of modes on response rates. A responsive design would specify a rule for switching modes. This situation is akin to the adaptive treatment regimes for prostate cancer investigated by Thall et al. (2000).

However, an adaptive design is not necessarily a responsive design. I'll give an example from my work. I'm experimenting with an algorithm to increase contact rates. The algorithm bases the decision about when to call a case on the history of previous attempts and the frame data. This approach is adaptive in that the next step is based on the results of previous steps. At each stage, all the prior information is used to determine the next step. Does this meet the definition of responsive design? I've wavered in my own mind on this question. There is an indicator -- the ranking of the call windows for each protocol. But the design is not altered when this indicator hits a target level, and there are no "phases" of data collection. I would say it is not responsive design.
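For illustration only (this is not the actual algorithm from my work), a crude version of the general idea -- ranking call windows by the contact rates observed on prior attempts -- might look like this. The windows and the attempt history are invented, and a real version would also condition on frame data.

from collections import defaultdict

def rank_call_windows(attempt_history):
    # attempt_history: (call window, whether contact was made) pairs.
    # Score each window by its observed contact rate and rank the windows
    # so the next attempt goes to the most promising one.
    tries = defaultdict(int)
    contacts = defaultdict(int)
    for window, contacted in attempt_history:
        tries[window] += 1
        contacts[window] += int(contacted)
    return sorted(tries, key=lambda w: contacts[w] / tries[w], reverse=True)

history = [("weekday_evening", True), ("weekday_evening", False),
           ("weekend_day", True), ("weekday_day", False)]
print(rank_call_windows(history))  # best-ranked window first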

For me, the term adaptive design was important as it linked my research with the adaptive treatment regime literature. I think there is a lot to be gained from applying methods from that stream of literature to surveys. And it seemed to me that it wasn't quite the same thing as responsive design.

In any case, there is clearly definitional work to be done. Proponents of either term need to clarify the meanings of these words and provide examples and counterexamples. The worst outcome would be similar strains of research developing under different terminologies without interacting with each other.

Wednesday, September 8, 2010

Interviewer Variance in Face-to-Face Surveys

There have been several important studies of interviewer variance in face-to-face surveys. O'Muircheartaigh and Campanelli (1998) report on a study that used an interpenetrated design to evaluate the impact of interviewers on variance estimates.

There are also studies showing that interviewers vary in their ability to establish contact (Campanelli et al., "Can you hear me knocking?", 1999) and to elicit response (Durrant, Groves, Staetsky, and Steele, 2010).

Although O'Muircheartaigh and Campanelli account for the clustering of the sample design, they don't account for differences in response (due to contact or refusal). It may be that variation in response rates, or in the composition of the respondent pool, explains some (certainly not all) of the interviewer variation.

If that is the case, then attempting to control the protocols interviewers use to recruit respondents (like call timing) might help reduce interviewer variance.
