This is a question that I get asked quite frequently. Most of what I would want to say on the topic is in this paper I wrote with Mick Couper a couple of years ago.
I have been thinking that a little historical context might help in answering such a question. I'm not sure the paper we wrote does that. I imagine that surveys of old were designed ahead of time, carried out, and then evaluated after they were complete. Probably too simple, but it makes sense. In field surveys, it was hard to even know what was happening until it was all over.
As response rates declined, it became more difficult to manage surveys. The uncertainty grew. Surveys ended up making ad hoc changes more and more frequently. "Oh no, we aren't hitting our targets. Increase the incentive!" That seems like a bad process. There isn't any planning, so bad decisions and inefficiency are more likely. And it's hard to replicate a survey that includes a "panic" phase.
Not to put words in their mouths, but Groves and Heeringa wanted to address this situation. They gave a conceptual outline for how to do so. Their approach emphasizes pre-planning (risk management) and experimentation aimed at making optimal or nearly-optimal choices with the information at hand.
To me, that's the dividing line between "ad hoc" changes and responsive design. It allows us to create reproducible procedures. It also allows us to design surveys in a way that could be described as optimal, given information deficits (i.e. uncertainty).
We could probably look backwards to the "pre-Responsive Design" era and find examples of responsive design. But Groves and Heeringa gave us a systematic way to think about the problem and to create replicable research.