
"Responsive Design" and "Adaptive Design"

My dissertation was entitled "Adaptive Survey Design to Reduce Nonresponse Bias." I had been working for several years on "responsive designs" before that. As I was preparing my dissertation, I really saw "adaptive" design as a subset of responsive design.

Since then, I've seen both terms used in different places. As both terms are relatively new, there is likely to be confusion about the meanings. I thought I might offer my understanding of the terms, for what it's worth.

The term "responsive design" was coined by Groves and Heeringa (2006), so I think their definition is the one that should be used. They defined "responsive design" in the following way:

1. Pre-identify a set of design features that affect cost and error tradeoffs.
2. Identify indicators for these costs and errors, and monitor them during data collection.
3. Alter the design features according to pre-identified decision rules that use the indicators from step 2.

I think this definition of responsive design would not cover "ad hoc" changes to survey designs. For example, increasing incentives when response rate targets are not met would not be responsive design. There is no planning: no indicators are defined and no decision rules are in place before the survey begins, only after the process has gone "off the rails."

A different type of survey design might assign different protocols for different cases. For instance, you might stratify your sample by propensity to respond and then give different treatments to high and low responding cases. This does not meet the definition of responsive design as the design features are not altered based on the incoming data from the field.

You might ask, "why is pre-planning so important?" To my mind, the planning indicates that the tradeoffs have been considered. Ad hoc decisions, by contrast, are generally made rapidly, under less than ideal conditions, with the aim of minimizing damage. It's the same reasoning that prefers "risk management" to "damage control."

In addition to pre-planning, responsive design also includes alteration of design features following decision rules based on incoming data from the field.

As an example of a responsive design, consider the problem of controlling variation in subgroup response rates. Our indicator, in this case, is the set of subgroup response rates. Our decision rule might be that if any subgroup response rate lags behind the others by more than five percentage points, then that group will be prioritized in our sample management system. In this example, there is an indicator and a decision rule that will trigger an alteration of the design. I would call this responsive design.
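To make that concrete, here is a minimal sketch of what such a monitoring rule might look like in code. The subgroup names, the rates, and the choice to compare each group against the best-performing group are my own illustrative assumptions, not part of Groves and Heeringa's definition.

```python
# A minimal sketch of the subgroup-monitoring rule described above.
# The subgroups, the rates, and the five-point threshold are hypothetical.

def groups_to_prioritize(response_rates, threshold=5.0):
    """Return subgroups whose response rate lags the leading subgroup
    by more than `threshold` percentage points."""
    leader = max(response_rates.values())
    return [group for group, rate in response_rates.items()
            if leader - rate > threshold]

# Hypothetical rates partway through data collection.
current_rates = {"age_18_29": 42.0, "age_30_49": 51.5, "age_50_plus": 55.0}
lagging = groups_to_prioritize(current_rates)
print(lagging)  # ['age_18_29'] -> flag these cases in the sample management system
```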

Adaptive design, on the other hand, I view as overlapping with responsive design. I became interested in "adaptive treatment regimes" through Susan Murphy here at the University of Michigan. The problems addressed by these regimes seemed similar in structure to those faced by surveys: multi-stage decisions about which of several possible treatments to apply. The idea is that the treatment can be tailored to the characteristics of the patient and the history of previous treatments. Further, there may be interactions between treatments such that the sequence of treatments matters. She and her colleagues were developing statistical and experimental methods for addressing these types of problems. I tried to apply them to surveys.

In my mind, the adaptive treatment regime approach is a set of statistical methods that could be used to develop responsive designs. That is, this array of methods could be used to develop indicators and decision rules that would be used to implement responsive designs.

For example, one big question facing mixed-mode surveys is what sequence of modes to use and when to make the switch. Smyth et al. (2010) explore the impact of the sequence and "dosage" of modes on response rates. A responsive design would specify a rule for switching modes. This situation is akin to the adaptive treatment regimes for prostate cancer investigated by Thall et al. (2000).
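As a rough illustration of what a pre-specified switching rule could look like, here is a hedged sketch. The mode sequence, the 14-day minimum, and the one-point weekly-gain cutoff are assumptions I made for the example; they are not taken from Smyth et al. or Thall et al.

```python
# A hypothetical pre-specified mode-switch rule: move to the next mode once the
# current mode has been in the field for at least 14 days and weekly gains in
# the response rate have fallen below one percentage point.

MODE_SEQUENCE = ["web", "mail", "telephone"]

def next_mode(current_mode, days_in_mode, weekly_gain_pct):
    """Return the mode to use next under the illustrative decision rule."""
    if days_in_mode >= 14 and weekly_gain_pct < 1.0:
        idx = MODE_SEQUENCE.index(current_mode)
        if idx + 1 < len(MODE_SEQUENCE):
            return MODE_SEQUENCE[idx + 1]
    return current_mode

print(next_mode("web", days_in_mode=16, weekly_gain_pct=0.4))  # 'mail'
```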

However, an adaptive design is not necessarily responsive design. I'll give an example from my work. I'm experimenting with an algorithm to increase contact rates. The algorithm bases the decision about when to call a case on the history of previous attempts and the frame data. This approach is adaptive in that the next step is based on the results of previous steps. At each stage, all the prior information is used to determine the next step. Does this meet the definition of responsive design? I've wavered in my own mind on this question. There is an indicator -- the ranking of the call windows for each protocol. But the design is not altered when this indicator hits a target level. There is no "phase" of data collection. I would say it is not responsive design.
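To show the structure I mean (and only the structure; this is not my actual algorithm), here is a toy sketch in which call windows are ranked by an estimated contact propensity that is updated after each attempt. The windows, the baseline probabilities, and the update rule are all invented for the illustration.

```python
# A toy illustration of adaptively ranking call windows from prior attempts.
# The windows, baseline propensities, and update factors are invented here.

def rank_call_windows(priors, attempt_history):
    """Order call windows by estimated contact propensity.

    priors          -- dict mapping window -> baseline contact probability
                       (e.g., derived from frame data)
    attempt_history -- list of (window, contacted) tuples for this case
    """
    scores = dict(priors)
    for window, contacted in attempt_history:
        # Failed attempts in a window lower its score; a contact raises it.
        scores[window] *= 1.5 if contacted else 0.7
    return sorted(scores, key=scores.get, reverse=True)

priors = {"weekday_day": 0.15, "weekday_eve": 0.30, "weekend": 0.25}
history = [("weekday_eve", False), ("weekday_eve", False)]
print(rank_call_windows(priors, history))
# ['weekend', 'weekday_day', 'weekday_eve'] -> try the weekend window next
```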

For me, the term adaptive design was important as it linked my research with the adaptive treatment regime literature. I think there is a lot to be gained from applying methods from that stream of literature to surveys. And it seemed to me that it wasn't quite the same thing as responsive design.

In any case, there is clearly definitional work to be done. Proponents of either term need to clarify the meanings of these words and provide examples and counterexamples. The worst outcome would be similar strains of research developing under different terminologies without interacting with each other.

Comments

  1. James, my current understanding of the literature on both topics is that you would need to have a survey with adaptive designs in order to set the indicators and the targets for responsive design decisions in a more controlled way. In most of the adaptive design examples I have seen there is an element of randomization and learning what the best treatment is, given a set of indicators you can collect along the way. This type of information would be crucial for responsive designers, don't you think?

  2. Frauke, that was my thinking when I started doing this. But when I think about the current experiment that I'm running, the goal is to create a set of decision rules that are in place at the beginning of the design. Each case is governed by the rule, so there isn't any "phase" in the way Groves and Heeringa describe it. Maybe that's just fussiness on my part; perhaps even if each decision or phase is triggered at a very micro level, it still fits the definition. I'm not sure.

  3. James, I have a somewhat different read of Groves & Heeringa, especially in the context of its implementation in NSFG. The third step is changing the design based on cost-error tradeoffs. For example, a higher incentive group proves to be better and a decision is made to use the higher incentive. This can be within a sample, but also across samples in a continuous data collection. The survey itself is not defined by a sample release, but by a year or years of data collection. I was glad to see your post and hope we get to talk in a couple of weeks.

  4. Thanks, Andy. My point is that design changes forced on a survey by unplanned exigencies shouldn't be labeled responsive design. But maybe that's less important than the notion that decisions are based on incoming data.

