I've been running an experiment on a relatively small survey (300 RDD interviews per month). Since the survey is small, I need to run the experiment over many months to accumulate enough data.
One unintended consequence of this long field period is that I observe fluctuations over the course of the year that may indicate seasonal effects. April is the starkest example. In every other month, the experimental method produced higher contact rates than the control. But not April. In April, the control group did better.
I have at least two hypotheses about why:
1. April is one of the toughest months for contacting households. Something about the experimental method interacts with the seasonal effect to produce lower contact rates for the experimental method. Seems unlikely.
2. Sampling error. If you run the experiment in enough months, one of them will come up a loser. More likely.
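A quick simulation makes the sampling-error hypothesis concrete. The numbers below are assumptions for illustration only (the post doesn't report the true contact rates): suppose the experimental method really does contact 25% of households versus 20% for control, with the 300 monthly interviews split roughly 150 per arm. Even then, an apparent reversal in at least one of twelve months turns out to be quite common.

```python
import random

# Hypothetical parameters -- NOT taken from the actual survey.
random.seed(42)
N_SIMS = 2000        # simulated "years" of the experiment
MONTHS = 12
N_PER_ARM = 150      # assumes the 300 monthly interviews split evenly
P_CONTROL = 0.20     # assumed true contact rate, control
P_EXPERIMENTAL = 0.25  # assumed true contact rate, experimental

def monthly_contacts(p, n):
    """Number of contacted households in one month for one arm."""
    return sum(random.random() < p for _ in range(n))

# Count how many simulated years contain at least one month where
# the control arm matched or beat the experimental arm by chance.
years_with_reversal = 0
for _ in range(N_SIMS):
    for _ in range(MONTHS):
        if (monthly_contacts(P_CONTROL, N_PER_ARM)
                >= monthly_contacts(P_EXPERIMENTAL, N_PER_ARM)):
            years_with_reversal += 1
            break

print(f"Share of simulated years with a losing month: "
      f"{years_with_reversal / N_SIMS:.2f}")
```

Under these assumed rates, a single month's comparison rests on only ~150 cases per arm, so month-level noise is large even when the year-long effect is real. That is exactly the "run it enough months and one will come up a loser" intuition in hypothesis 2.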