I'm thinking again about the experiments that we run. Yes, they are usually messy. In my last post, I talked about the inherent messiness of survey experiments: surveys have many design features to consider, and these features may interact in ways that mean we can't simply pull out an experiment on a single feature and generalize the result to other surveys.
But I started thinking about other problems we have with experiments. I think another big issue is that methodological experiments are often run as "add-ons" to larger surveys. It's hard to obtain funding to run a survey just to do a methodological experiment. So, we add our experiments to existing surveys.
The problem is that this approach usually creates a limitation. The experiments can't risk creating a problem for the survey. In other words, they can't lead to reductions in response rates or threaten other targets associated with the main (i.e., non-methodological) objective of the survey. The result is that the experiments are often confined to things that can only have small effects.
A possible exception is when a large, ongoing survey undertakes a re-design. The catch is that this only happens for large surveys, and the research is still shaped by the objectives of that particular survey. I'd like to see this happen more generally. It would be nice to have some surveys with a methodological focus that could provide results that generalize to a population of smaller-scale surveys. Such a survey could also have a secondary substantive goal.