
"Responsive Design" and "Adaptive Design"

My dissertation was entitled "Adaptive Survey Design to Reduce Nonresponse Bias." I had been working for several years on "responsive designs" before that. As I was preparing my dissertation, I really saw "adaptive" design as a subset of responsive design.

Since then, I've seen both terms used in different places. As both terms are relatively new, there is likely to be confusion about their meanings. I thought I might offer my understanding of the terms, for what it's worth.

The term "responsive design" was developed by Groves and Heeringa (2006). They coined the term, so I think their definition is the one that should be used. They defined "responsive design" in the following way:

1. Preidentify a set of design features that affect cost and error tradeoffs.
2. Identify indicators for these costs and errors. Monitor these during data collection.
3. Alter the design features according to pre-identified decision rules that use the indicators from step 2.
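To make the three-step structure concrete, here is a minimal Python sketch. Everything in it -- the design features, the indicator names, and the thresholds -- is a hypothetical illustration, not a description of any particular survey.

```python
# A schematic sketch of the three steps; all names and values are
# hypothetical. "paradata" stands in for whatever case-level process
# data the survey monitors.

# Step 1: pre-identified design features affecting cost/error tradeoffs.
design = {"incentive": 20, "mode": "telephone"}

# Step 2: indicators for costs and errors, recomputed as data come in.
def compute_indicators(paradata):
    completes = sum(1 for case in paradata if case["complete"])
    return {
        "response_rate": completes / len(paradata),
        "cost_per_complete": sum(c["cost"] for c in paradata) / max(completes, 1),
    }

# Step 3: decision rules fixed before the survey begins.
def apply_decision_rules(design, indicators):
    if indicators["response_rate"] < 0.40:  # pre-planned trigger
        design["incentive"] = 35            # pre-planned phase-2 feature
    return design

# Daily monitoring step during data collection:
paradata = [{"complete": True, "cost": 30.0}, {"complete": False, "cost": 12.0}]
design = apply_decision_rules(design, compute_indicators(paradata))
```

The point of the sketch is that both the indicator and the rule exist before fieldwork begins; the only thing that changes during data collection is the data fed into them.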

I think this definition of responsive design would not cover "ad hoc" changes to survey designs. For example, increasing incentives when response rate targets are not met would not be responsive design. There is no planning: no indicators are defined and no decision rules are in place before the survey begins -- the design changes only after the process has gone "off the rails."

A different type of survey design might assign different protocols to different cases. For instance, you might stratify your sample by propensity to respond and then assign different treatments to the high- and low-propensity strata. This does not meet the definition of responsive design, since the design features are not altered based on the incoming data from the field.
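A minimal sketch of such a static, stratified assignment follows; the propensity cutoff and the protocols are invented for illustration. Note that the assignment happens once, up front, and nothing in it reacts to incoming field data:

```python
# A minimal sketch, assuming each sampled case carries an estimated
# response propensity on the frame. Protocols are assigned once,
# before fieldwork, and never altered -- which is why this design is
# not responsive in the Groves-Heeringa sense.

def assign_protocol(case):
    # Hypothetical cutoff: low-propensity cases get the richer protocol.
    if case["propensity"] < 0.3:
        return {"incentive": 40, "mode": "face-to-face"}
    return {"incentive": 10, "mode": "web"}

sample = [{"id": 1, "propensity": 0.15}, {"id": 2, "propensity": 0.70}]
protocols = {case["id"]: assign_protocol(case) for case in sample}
```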

You might ask, "why is pre-planning so important?" To my mind, the planning indicates that the tradeoffs have been considered. Ad hoc decisions, by contrast, are made rapidly, under less than ideal conditions, with the goal of minimizing damage. It's the same reasoning that prefers "risk management" to "damage control."

In addition to pre-planning, responsive design also includes alteration of design features following decision rules based on incoming data from the field.

As an example of a responsive design, consider the problem of controlling variation in subgroup response rates. Our indicator, in this case, is the set of subgroup response rates. Our decision rule might be that if any subgroup's response rate lags the best-performing subgroup by more than five percentage points, then that group will be prioritized in our sample management system. In this example, there is an indicator and a decision rule that will trigger an alteration of the design. I would call this responsive design.
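Here is a small sketch of that indicator and decision rule. The subgroups, the rates, and the reading of "lags behind" as relative to the best-performing subgroup are assumptions for illustration:

```python
# Indicator: the set of subgroup response rates.
# Decision rule (hypothetical): flag any subgroup more than five
# percentage points behind the best-performing subgroup for
# prioritization in the sample management system.

def lagging_subgroups(response_rates, threshold=0.05):
    """response_rates: dict mapping subgroup name -> response rate."""
    best = max(response_rates.values())
    return [g for g, rate in response_rates.items() if best - rate > threshold]

rates = {"urban": 0.52, "rural": 0.44, "suburban": 0.51}
for group in lagging_subgroups(rates):
    print(f"Prioritize subgroup: {group}")  # e.g., raise priority in the CATI system
```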

Adaptive design, on the other hand, I view as overlapping with responsive design. I became interested in "adaptive treatment regimes" through Susan Murphy here at the University of Michigan. The problems addressed by these regimes seemed similar in structure to those faced by surveys: multi-stage decisions about which of several possible treatments to apply. The idea is that the treatment can be tailored to the characteristics of the patient and the history of previous treatments. Further, there may be interactions between treatments such that the sequence of treatments matters. She and her colleagues were developing statistical and experimental methods for these kinds of problems. I tried to apply them to surveys.

In my mind, the adaptive treatment regime approach is a set of statistical methods that could be used to develop responsive designs. That is, this array of methods could be used to develop indicators and decision rules that would be used to implement responsive designs.

For example, one big question for mixed-mode surveys is what sequence of modes to use and when to make the switch. Smyth et al. (2010) explore the impact of the sequence and "dosage" of modes on response rates. A responsive design would specify a rule for switching modes. This situation is akin to the adaptive treatment regimes for prostate cancer investigated by Thall et al. (2000).
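A hedged sketch of what such a pre-specified switching rule might look like follows. The mode sequence, the dosage, and the yield floor are invented for illustration and are not taken from Smyth et al. (2010):

```python
# Hypothetical mode sequence and switching rule.
MODE_SEQUENCE = ["web", "mail", "telephone"]

def next_mode(current_mode, days_in_mode, daily_completes, max_days=14, floor=5):
    """Advance to the next mode when the current one is 'exhausted':
    either the pre-set dosage (max_days) is reached or the daily yield
    of completes falls below a floor."""
    exhausted = days_in_mode >= max_days or daily_completes < floor
    idx = MODE_SEQUENCE.index(current_mode)
    if exhausted and idx + 1 < len(MODE_SEQUENCE):
        return MODE_SEQUENCE[idx + 1]
    return current_mode

# e.g., two weeks into the web phase with only 3 completes yesterday:
print(next_mode("web", days_in_mode=14, daily_completes=3))  # -> "mail"
```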

However, an adaptive design is not necessarily responsive design. I'll give an example from my work. I'm experimenting with an algorithm to increase contact rates. The algorithm bases the decision about when to call a case on the history of previous attempts and the frame data. This approach is adaptive in that the next step is based on the results of previous steps. At each stage, all the prior information is used to determine the next step. Does this meet the definition of responsive design? I've wavered in my own mind on this question. There is an indicator -- the ranking of the call windows for each protocol. But the design is not altered when this indicator hits a target level. There is no "phase" of data collection. I would say it is not responsive design.
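The following toy sketch conveys the flavor of that algorithm. The real version uses model-based estimates built from frame data and the full call history; here a smoothed per-window contact rate stands in for those predictions:

```python
# A simplified, hypothetical sketch of adaptive call scheduling: rank
# the call windows for a case using its contact history, then place
# the next attempt in the highest-ranked window. The scoring below is
# a toy stand-in for the model estimates actually used.

from collections import Counter

WINDOWS = ["weekday_day", "weekday_eve", "weekend"]

def rank_windows(call_history):
    """call_history: list of (window, contacted) tuples for one case."""
    attempts = Counter(w for w, _ in call_history)
    contacts = Counter(w for w, hit in call_history if hit)
    # Smoothed contact rate per window; the prior (0.5 / 1.0) stands in
    # for frame-data-based predictions when a window is unattempted.
    return sorted(WINDOWS,
                  key=lambda w: (contacts[w] + 0.5) / (attempts[w] + 1.0),
                  reverse=True)

history = [("weekday_day", False), ("weekday_eve", True), ("weekend", False)]
print(rank_windows(history)[0])  # window for the next call attempt
```

The sketch also shows why I waver: there is an indicator (the ranking), but no phase boundary at which a pre-set rule alters the design.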

For me, the term adaptive design was important as it linked my research with the adaptive treatment regime literature. I think there is a lot to be gained from applying methods from that stream of literature to surveys. And it seemed to me that it wasn't quite the same thing as responsive design.

In any case, there is clearly definitional work to be done. Proponents of either term need to clarify the meanings of these words and provide examples and counterexamples. The worst outcome would be similar strains of research developing under different terminologies without interacting with each other.

Comments

  1. James, my current understanding of the literature on both topics is that you would need to have a survey with adaptive designs in order to set the indicators and the targets for responsive design decisions in a more controlled way. In most of the adaptive design examples I have seen there is an element of randomization and learning what the best treatment is, given a set of indicators you can collect along the way. This type of information would be crucial for responsive designers, don't you think?

  2. Frauke, that was my thinking when I started doing this. But when I think about the current experiment that I'm running, the goal is to create a set of decision rules that are in place at the beginning of the design. Each case is governed by the rules. So there isn't any "phase" in the way Groves and Heeringa describe it. Maybe that's just fussiness on my part. Or maybe, even if each decision or phase is triggered at a very micro level, it still fits the definition. I'm not sure.

  3. James, I have a somewhat different read of Groves & Heeringa, especially in the context of its implementation in the NSFG. The third step is changing the design based on cost-error tradeoffs. For example, a higher incentive group proves to be better and a decision is made to use the higher incentive. This can happen within a sample, but also across samples in a continuous data collection. The survey itself is not defined by a single sample release, but by a year or years of data collection. I was glad to see your post and hope we get to talk in a couple of weeks.

  4. Thanks, Andy. My point is that surveys that are forced into design changes by unplanned exigencies shouldn't label those changes responsive design. But maybe that's less important than the notion that decisions are based on incoming data.
