"Responsive Design" and "Adaptive Design"

My dissertation was entitled "Adaptive Survey Design to Reduce Nonresponse Bias." I had been working for several years on "responsive designs" before that. As I was preparing my dissertation, I really saw "adaptive" design as a subset of responsive design.

Since then, I've seen both terms used in different places. As both terms are relatively new, there is likely to be confusion about the meanings. I thought I might offer my understanding of the terms, for what it's worth.

The term "responsive design" was developed by Groves and Heeringa (2006). They coined the term, so I think their definition is the one that should be used. They defined "responsive design" in the following way:

1. Pre-identify a set of design features that affect cost and error tradeoffs.
2. Identify indicators for these costs and errors, and monitor them during data collection.
3. Alter the design features according to pre-identified decision rules that draw on the indicators from step 2.
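
To make the structure concrete, here is a minimal sketch in Python of what such a pre-planned monitor-and-alter loop might look like. All of the names (DecisionRule, run_phase, the indicator keys) are hypothetical; Groves and Heeringa describe a design strategy, not software, so this is only one plausible rendering of the three steps.

```python
# Hypothetical sketch of the three-step responsive design structure.
# Nothing here comes from a real survey-management system.

from dataclasses import dataclass
from typing import Callable, Dict, List

Indicators = Dict[str, float]

@dataclass
class DecisionRule:
    """Step 1: a design change pre-identified before data collection begins."""
    name: str
    triggered: Callable[[Indicators], bool]   # reads the monitored indicators
    alter_design: Callable[[], None]          # changes a design feature

def run_phase(compute_indicators: Callable[[], Indicators],
              rules: List[DecisionRule]) -> None:
    """Step 2: monitor the indicators; step 3: alter the design
    according to rules that were set before the survey began."""
    indicators = compute_indicators()
    for rule in rules:
        if rule.triggered(indicators):
            rule.alter_design()

# Example: a pre-identified rule to move to a higher-incentive protocol
# if the projected response rate falls below a planning target.
rules = [DecisionRule(
    name="raise_incentive",
    triggered=lambda ind: ind["projected_response_rate"] < 0.60,
    alter_design=lambda: print("switching to the higher-incentive protocol"))]

run_phase(lambda: {"projected_response_rate": 0.55}, rules)
```

Note that the incentive increase in this sketch would count as responsive precisely because the rule existed before data collection began, in contrast to the ad hoc incentive change discussed next.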

I think this definition of responsive design would not cover "ad hoc" changes to survey designs. For example, increasing incentives when response rate targets are not met would not be responsive design. There is no planning -- no indicators defined and no decision rules in place before the survey begins -- only a reaction after the process has gone "off the rails."

A different type of survey design might assign different protocols to different cases. For instance, you might stratify your sample by propensity to respond and then give different treatments to high- and low-propensity cases. This does not meet the definition of responsive design, as the design features are not altered based on the incoming data from the field.
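
A sketch of that static, tailored-but-not-responsive design might look like the following; the propensity cutoff and protocol names are invented for illustration.

```python
# Hypothetical static tailoring: protocols assigned by estimated response
# propensity before fieldwork and never altered by incoming field data.

def assign_protocol(propensity: float) -> str:
    # The 0.5 cutoff and protocol names are made up for this sketch.
    return "standard_protocol" if propensity >= 0.5 else "enhanced_incentive"

estimated_propensities = {"case_001": 0.72, "case_002": 0.31, "case_003": 0.55}
assignments = {case: assign_protocol(p)
               for case, p in estimated_propensities.items()}
print(assignments)
```

The assignment happens once, before fieldwork; nothing in the incoming data can change it, which is what keeps it outside the responsive design definition.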

You might ask, "why is pre-planning so important?" To my mind, the planning indicates that the tradeoffs have been considered. Generally, ad hoc decisions are made to minimize the damage. Ad hoc decisions are made rapidly, under less than ideal conditions. It's the same reasoning that prefers "risk managment" to "damage control."

In addition to pre-planning, responsive design also includes alteration of design features following decision rules based on incoming data from the field.

As an example of a responsive design, consider the problem of controlling variation in subgroup response rates. Our indicator, in this case, is the set of subgroup response rates. Our decision rule might be that if any subgroup's response rate lags behind the others by more than five percentage points, then that group will be prioritized in our sample management system. In this example, there is an indicator and a decision rule that will trigger an alteration of the design. I would call this responsive design.
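
As a rough illustration, that decision rule could be coded along the following lines. The subgroup names, rates, and prioritization step are fabricated, and "lags behind" is read here as trailing the best-performing subgroup, which is only one possible reading; the five-point threshold comes from the example above.

```python
# Hypothetical illustration of the subgroup response-rate decision rule.

subgroup_response_rates = {"urban": 0.42, "rural": 0.35, "suburban": 0.44}

def lagging_subgroups(rates, threshold=0.05):
    """Subgroups more than `threshold` behind the highest subgroup rate."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if best - rate > threshold]

def prioritize(group):
    # Stand-in for raising the group's priority in a sample management system.
    print(f"Prioritizing {group} cases in the sample management system")

# Decision rule: any subgroup more than five points behind gets prioritized.
for group in lagging_subgroups(subgroup_response_rates):
    prioritize(group)
```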

Adaptive design, on the other hand, I view as overlapping with responsive design. I became interested in "adaptive treatment regimes" through Susan Murphy here at the University of Michigan. The problems addressed by these regimes seemed similar in structure to those faced by surveys: multi-stage decisions about which of a possible set of treatments to apply. The idea is that the treatment can be tailored to the characteristics of the patient and the history of previous treatments. Further, there may be interactions between treatments such that the sequence of treatments is important. She and her colleagues were working on statistical and experimental methods for addressing these types of problems. I tried to apply them to surveys.

In my mind, the adaptive treatment regime approach is a set of statistical methods that could be used to develop responsive designs. That is, this array of methods could be used to develop indicators and decision rules that would be used to implement responsive designs.

For example, one big question facing mixed-mode surveys is what sequence of modes to use and when to make the switch. Smyth et al. (2010) explore the impact of the sequence and "dosage" of modes on response rates. A responsive design would specify a rule for switching modes. This situation is akin to the adaptive treatment regimes for prostate cancer investigated by Thall et al. (2000).

However, an adaptive design is not necessarily responsive design. I'll give an example from my work. I'm experimenting with an algorithm to increase contact rates. The algorithm bases the decision about when to call a case on the history of previous attempts and the frame data. This approach is adaptive in that the next step is based on the results of previous steps. At each stage, all the prior information is used to determine the next step. Does this meet the definition of responsive design? I've wavered in my own mind on this question. There is an indicator -- the ranking of the call windows for each protocol. But the design is not altered when this indicator hits a target level. There is no "phase" of data collection. I would say it is not responsive design.
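
Just to illustrate the flavor of the approach, here is a toy sketch (not the actual algorithm) that ranks call windows for a case by blending its observed contact history with a frame-based prior. The call windows, scoring formula, and data are all invented for this illustration.

```python
# Toy sketch of adaptive call scheduling: rank call windows for a case
# using its prior attempt history plus frame data. Invented for
# illustration; not the actual experimental algorithm.

from collections import Counter

CALL_WINDOWS = ["weekday_day", "weekday_eve", "weekend"]

def rank_windows(attempt_history, frame_priors):
    """attempt_history: list of (window, contacted) tuples for this case.
    frame_priors: baseline contact propensity per window from frame data."""
    attempts = Counter(w for w, _ in attempt_history)
    contacts = Counter(w for w, contacted in attempt_history if contacted)
    scores = {}
    for w in CALL_WINDOWS:
        n = attempts[w]
        observed = contacts[w] / n if n else 0.0
        # Shrink the observed contact rate toward the frame-based prior
        # when a case has few attempts in that window.
        scores[w] = (n * observed + 2 * frame_priors[w]) / (n + 2)
    return sorted(CALL_WINDOWS, key=scores.get, reverse=True)

history = [("weekday_day", False), ("weekday_day", False), ("weekend", True)]
priors = {"weekday_day": 0.20, "weekday_eve": 0.30, "weekend": 0.25}
print(rank_windows(history, priors))  # next attempt goes to the top window
```

The point is the structure: every prior attempt updates the ranking, but there is no phase boundary and no pre-set trigger that alters the design.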

For me, the term adaptive design was important as it linked my research with the adaptive treatment regime literature. I think there is a lot to be gained from applying methods from that stream of literature to surveys. And it seemed to me that it wasn't quite the same thing as responsive design.

In any case, there is clearly definitional work to be done. Proponents of either term need to clarify the meanings of these words and provide examples and counterexamples. The worst outcome would be similar strains of research developing under different terminologies without interacting with each other.


  1. James, my current understanding of the literature on both topics is that you would need to have a survey with adaptive designs in order to set the indicators and the targets for responsive design decisions in a more controlled way. In most of the adaptive design examples I have seen there is an element of randomization and learning what the best treatment is, given a set of indicators you can collect along the way. This type of information would be crucial for responsive designers, don't you think?

  2. Frauke, that was my thinking when I started doing this. But when I think about the current experiment that I'm running, the goal is to create a set of decision rules that are in place at the beginning of the design. Each case is governed by the rules, so there isn't any "phase" in the way Groves and Heeringa describe it. Maybe that's just fussiness on my part -- perhaps even if each decision or phase is triggered at a very micro level, it still fits the definition. I'm not sure.

  3. James, I have a somewhat different read of Groves & Heeringa, especially in the context of its implementation in NSFG. The third step is changing the design based on cost-error tradeoffs. For example, a higher incentive group proves to be better, and a decision is made to use the higher incentive. This can be within a sample, but also across samples in a continuous data collection. The survey itself is not defined by a single sample release, but by a year, or years, of data collection. I was glad to see your post and hope we get to talk in a couple of weeks.

  4. Thanks, Andy. My point is that design changes forced on surveys by unplanned exigencies shouldn't be labeled responsive design. But maybe that's less important than the notion that decisions are based on incoming data.
