Sorry for the long layoff! I had a very busy Spring.
I've been working with the results of an experiment we ran on a survey last year. The experimental condition we wanted to vary was the mode of contact. The results were a bit messy, since we didn't have complete control of all the conditions. The main issue was that we couldn't ensure that sampled units in each arm of the experiment received the same treatment (equality of effort, that is, the same number of calls, distributed in an equivalent manner across call windows and over time).
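As a concrete illustration, here is a minimal sketch of one way to check equality of effort after the fact: cross-tabulate call attempts by arm and call window and test whether effort was distributed the same way in each arm. The data frame and column names (arm, call_window) are hypothetical, and the counts are made up for the example.

    import pandas as pd
    from scipy.stats import chi2_contingency

    # One row per call attempt: the arm the case was assigned to and
    # the window in which the attempt was made. (Hypothetical data.)
    calls = pd.DataFrame({
        "arm": ["A"] * 6 + ["B"] * 6,
        "call_window": ["weekday_day", "weekday_day", "weekday_eve",
                        "weekday_eve", "weekend", "weekend",
                        "weekday_day", "weekday_eve", "weekday_eve",
                        "weekend", "weekend", "weekend"],
    })

    # Cross-tabulate attempts by arm and call window...
    effort = pd.crosstab(calls["arm"], calls["call_window"])
    print(effort)

    # ...and test whether the distribution of effort over windows
    # differs between the arms.
    chi2, p, dof, _ = chi2_contingency(effort)
    print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.3f}")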
This is a common problem for the experiments we run. Most of them are 'piggy-backed' onto data collections for which the experiment is a lower priority; the primary job of those collections is, after all, to collect data.
I've been focused on the negatives of this messy situation, but there is a positive. Most of these experiments are embedded in real-world surveys. Hence, they should have greater external validity. If we try them again, many of the same essential conditions will be replicated. If we were to control every aspect of the survey, we'd create an artificial situation that most surveys would never replicate.
I also suspect that there are important interactions between some of this messiness and the condition we meant to manipulate. For that reason, it's important to report as much as we can on the messiness that makes up the essential survey conditions.
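One way to probe for such an interaction, sketched below under assumed names (a case-level file with a response indicator, the assigned arm, and the number of calls each case received), is to fit a model with an arm-by-effort interaction term. The data here are simulated purely for illustration.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulate case-level data in which the mode effect depends on effort.
    rng = np.random.default_rng(0)
    n = 500
    arm = rng.choice(["A", "B"], size=n)
    n_calls = rng.poisson(3, size=n) + 1
    logit = (-1.0 + 0.5 * (arm == "B") + 0.1 * n_calls
             - 0.15 * (arm == "B") * n_calls)
    responded = rng.binomial(1, 1 / (1 + np.exp(-logit)))
    cases = pd.DataFrame({"responded": responded, "arm": arm,
                          "n_calls": n_calls})

    # Logistic regression with an interaction: a nonzero arm:n_calls
    # coefficient would suggest that effort and mode of contact interact.
    model = smf.logit("responded ~ C(arm) * n_calls", data=cases).fit(disp=0)
    print(model.summary())

A significant interaction here wouldn't settle anything by itself, but it would flag that the differences in effort can't be ignored when interpreting the mode comparison.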