
Adaptive and Responsive Design

I've raised this topic a couple of times here. Several years ago, Groves and Heeringa (2006) proposed an approach to survey data collection that they called "Responsive Design." The design was rolled out in phases with information from prior phases being used to tailor the design in later phases.

In my dissertation, I wrote about "Adaptive Survey Design." For me, the main point of using the term "adaptive" was to link to the research on adaptive treatment regimes, especially as proposed by Susan Murphy and her colleagues.

I hadn't thought much about the relationship between the two. At the time, I saw what I was doing as a subset of responsive designs.

Since then, Barry Schouten and Melania Calinescu at Statistics Netherlands have defined "adaptive static" and "adaptive dynamic" designs. Adaptive static designs tailor the protocol to information on the sampling frame; for example, the mode of contact for each case might be determined by its characteristics on the frame, such as age. Adaptive dynamic designs tailor the design to incoming paradata. A refusal conversion protocol might be a commonly used example; changing incentives based on paradata might be another. The "adaptive dynamic" designs seem to come closest to the kind of designs I envisioned when writing my dissertation.
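
Here's a rough sketch, in Python, of the difference between the two kinds of rules. The field names (age, refused, call_attempts), the modes, and the cutoffs are all hypothetical illustrations, not rules taken from Schouten and Calinescu's work.

```python
# A minimal sketch of "adaptive static" vs. "adaptive dynamic" rules.
# All field names and cutoffs are hypothetical.

def static_rule(frame_record):
    """Adaptive static: the protocol is fixed up front from frame data."""
    if frame_record.get("age", 0) >= 65:
        return "mail"               # e.g., older cases start in a mail protocol
    return "web"                    # everyone else starts on the web

def dynamic_rule(paradata):
    """Adaptive dynamic: the protocol is updated from incoming paradata."""
    if paradata.get("refused", False):
        return "refusal_conversion"     # switch to a refusal conversion protocol
    if paradata.get("call_attempts", 0) >= 6:
        return "increase_incentive"     # e.g., raise the incentive offer
    return "continue_standard"

# Example: a case whose frame record says age 70 and who later refuses.
print(static_rule({"age": 70}))          # -> "mail"
print(dynamic_rule({"refused": True}))   # -> "refusal_conversion"
```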

Over the summer, Mick Couper and I gave a talk on responsive designs that included some definitional discussion. It was Mick's idea to describe these designs along a continuum, where the dimension is how much tailoring there is. On one end, single-protocol surveys apply the same protocol to every case. On the other end of the spectrum, adaptive treatment regimes provide individually-tailored protocols. Here's a graphic:

[Figure: a continuum of designs, running from single-protocol surveys at one end to individually-tailored adaptive treatment regimes at the other]

The definitions of these various terms may still be fluid. The important thing is that folks who are working on similar things are able to communicate and build upon each other's results.


Popular posts from this blog

"Responsive Design" and "Adaptive Design"

My dissertation was entitled "Adaptive Survey Design to Reduce Nonresponse Bias." I had been working for several years on "responsive designs" before that. As I was preparing my dissertation, I really saw "adaptive" design as a subset of responsive design.

Since then, I've seen both terms used in different places. As both terms are relatively new, there is likely to be confusion about the meanings. I thought I might offer my understanding of the terms, for what it's worth.

The term "responsive design" was developed by Groves and Heeringa (2006). They coined the term, so I think their definition is the one that should be used. They defined "responsive design" in the following way:

1. Preidentify a set of design features that affect cost and error tradeoffs.
2. Identify indicators for these costs and errors. Monitor these during data collection.
3. Alter the design features based on pre-identified decision rules based on the indi…
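
As a rough illustration of how those steps might look as a decision rule at a phase boundary, here is a minimal Python sketch. The indicator (mean estimated response propensity among active cases), the altered design features, and the cutoff are all hypothetical, not taken from Groves and Heeringa.

```python
# A toy version of the responsive-design loop: monitor pre-identified
# indicators, then alter pre-identified design features by a pre-set rule.
# The indicator, cutoff, and design features here are hypothetical.

def monitor_indicators(active_cases):
    """Step 2: monitor cost and error indicators during data collection."""
    props = [c["est_response_propensity"] for c in active_cases]
    return {"mean_propensity": sum(props) / len(props),
            "total_calls": sum(c["calls"] for c in active_cases)}

def phase_decision(indicators, propensity_cutoff=0.10):
    """Step 3: alter the design features (step 1) based on a pre-set rule."""
    if indicators["mean_propensity"] < propensity_cutoff:
        # e.g., move to a phase with a larger incentive and a subsample
        return {"incentive": 20, "subsample_remaining_cases": True}
    return {"incentive": 5, "subsample_remaining_cases": False}

active = [{"est_response_propensity": 0.08, "calls": 4},
          {"est_response_propensity": 0.05, "calls": 7}]
print(phase_decision(monitor_indicators(active)))
```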

Response Rates and Responsive Design

A recent article by Brick and Tourangeau re-examines the data from a paper by Groves and Peytcheva (2008). The original Groves and Peytcheva analyses were based on 959 estimates of nonresponse bias for variables with known values, drawn from 59 surveys with varying response rates. They found very little correlation between the response rate and the bias across those 959 estimates.

Brick and Tourangeau view the problem as a multi-level one, with the 959 estimates clustered within the 59 surveys. For each survey, they created a composite score based on all of that survey's bias estimates. Their results were somewhat sensitive to how the composite score was created. They do present several different ways of doing this -- simple mean, mean weighted by sample size, mean weighted by the number of estimates. Each of these study-level composite bias scores is more highly correlated with the response rate than the estimate-level biases were. They conclude: "This strongly suggests that nonresponse bias is partly a function of study-level characteristics; th…
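
Here is a small Python sketch of the composite-score idea, using made-up numbers rather than the actual Groves-Peytcheva data. It shows two of the variants (the simple mean and a sample-size-weighted mean) and the study-level correlation of each with the response rate.

```python
# Toy illustration of study-level composite bias scores; the numbers are
# invented, not the Groves-Peytcheva estimates.
import numpy as np
import pandas as pd

# one row per bias estimate: survey id, absolute bias, item sample size
est = pd.DataFrame({
    "survey":   [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "abs_bias": [0.02, 0.05, 0.03, 0.10, 0.08, 0.01, 0.02, 0.04, 0.03],
    "n":        [500, 500, 800, 300, 300, 1200, 1200, 900, 900],
})
rr = pd.Series({1: 0.70, 2: 0.35, 3: 0.60})     # study-level response rates

simple_mean = est.groupby("survey")["abs_bias"].mean()
n_weighted = est.groupby("survey").apply(
    lambda g: np.average(g["abs_bias"], weights=g["n"]))

for name, score in [("simple mean", simple_mean), ("n-weighted mean", n_weighted)]:
    r = np.corrcoef(score.loc[rr.index], rr)[0, 1]
    print(f"{name} composite vs. response rate: r = {r:.2f}")
```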

An Experimental Adaptive Contact Strategy

I'm running an experiment on contact methods in a telephone survey. I'm going to present the results of the experiment at the FCSM conference in November. Here's the basic idea.

Multi-level models are fit daily with household as a grouping factor. The models provide household-specific estimates of the probability of contact for each of four call windows. The predictor variables in this model are the geographic context variables available for an RDD sample.

Let $\mathbf{X_{ij}}$ denote a $k_j \times 1$ vector of demographic variables for the $i^{th}$ person and $j^{th}$ call. The data records are calls. There may be zero, one, or multiple calls to a household in each window. The outcome variable is an indicator for whether contact was achieved on the call. This contact indicator is denoted $R_{ijl}$ for the $i^{th}$ person on the $j^{th}$ call to the $l^{th}$ window. Then for each of the four call windows denoted $l$, a separate model is fit where each household is assum…
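
As a rough sketch of that modeling step, here is one way the daily fits could look. The original post doesn't say what software or exact specification was used; this example uses statsmodels' variational-Bayes mixed logistic regression with a household random intercept, fit separately for each call window, with hypothetical column names (contact, hh_id, window, urbanicity, median_income).

```python
# Sketch: per-window multilevel logistic models of contact with a household
# random intercept. File and column names are hypothetical.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

calls = pd.read_csv("call_records.csv")     # one row per call attempt

fits = {}
for window, d in calls.groupby("window"):   # a separate model for each window l
    model = BinomialBayesMixedGLM.from_formula(
        "contact ~ urbanicity + median_income",   # geographic context predictors
        {"hh": "0 + C(hh_id)"},                   # household random intercept
        d,
    )
    fits[window] = model.fit_vb()           # household-specific contact propensities
    print(window)
    print(fits[window].summary())
```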