I have this feeling that survey experiments are often very messy. Maybe it's just in comparison to the ideal type -- a laboratory with a completely controlled environment where only one variable is altered between two randomly assigned groups.
But still, surveys have a very complicated structure. We often call this the "essential survey conditions." But that glib phrase might hide some important details. My concern is that when we focus on a single feature of a survey design, e.g. incentives, we might come to the wrong conclusion if we don't consider how that feature interacts with other design features.
This matters when we attempt to generalize from published research to another situation. Take the well-known result -- incentives work! Except that the impact of incentives appears to differ between interviewer-administered and self-administered surveys. The other features of the design matter too, and they may moderate the expected effect of the feature under consideration.
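In statistical terms, this is just an interaction effect: the lift from an incentive depends on the mode. Here is a minimal sketch with simulated data -- the baseline response propensities and incentive effects are made-up numbers for illustration, not estimates from any real study:

```python
import random

random.seed(42)

# Hypothetical effect sizes, chosen only to illustrate the point:
# response propensity by mode, and the boost an incentive adds in each mode.
BASE = {"interviewer": 0.60, "self": 0.30}
INCENTIVE_EFFECT = {"interviewer": 0.05, "self": 0.15}

def response_rate(mode, incentive, n=100_000):
    """Simulate the share of sampled cases that respond."""
    p = BASE[mode] + (INCENTIVE_EFFECT[mode] if incentive else 0.0)
    return sum(random.random() < p for _ in range(n)) / n

# The incentive "works" in both modes, but not equally.
lift_interviewer = (response_rate("interviewer", True)
                    - response_rate("interviewer", False))
lift_self = (response_rate("self", True)
             - response_rate("self", False))

# The gap between the two lifts is the interaction term: an incentive
# effect estimated in one mode does not transfer cleanly to the other.
interaction = lift_self - lift_interviewer
```

A literature review that pools incentive experiments without noting the mode is, in effect, averaging over this interaction.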
Every time I start to write a literature review, this issue comes to mind as I try to reconcile the inevitably conflicting results. Of course, there are other explanations, such as the normal noise associated with published research results. But this is another potential reason that should be kept in mind.
The other side of the issue comes up when I'm writing up the methods I used. Then I have to remind myself to describe the survey design features in as much detail as possible, so that the context of the results will be clear.