We often run experiments with incentives. The context seems to matter a lot, and the value of the incentive keeps changing, so we need to run many experiments to find the "right" amount to offer.
These experiments often occur in repeated cross-sectional designs where we have a fair amount of experience. That is, we have already fielded the survey several times with a specific incentive payment.
Yet, when we run experiments in this situation, we ignore the evidence from prior iterations. Of course, there are problems with that evidence. The impact of a particular incentive can change over time. For example, as inflation erodes the value of the incentive, its effect on response rates may lessen. Other differences may stem from other changes made to the survey over time (even undocumented, seemingly minor ones).
On the other hand, totally discounting this evidence does not seem cost-effective. Then I recalled some literature on clinical trials, where ignoring existing data when developing new treatments raises ethical concerns. The textbook by Spiegelhalter, Abrams and Myles discusses methods for using historical controls. It seems like an idea that might be useful for this incentive evaluation problem.
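To make the idea concrete, one approach from that literature is the power prior: historical control data enter the analysis downweighted by a discount factor between 0 and 1, rather than being fully pooled or fully ignored. Below is a minimal sketch for a response-rate comparison using a Beta-Binomial model; the discount factor `a0` and all the counts are illustrative assumptions, not numbers from any actual survey.

```python
# Sketch of a power prior for historical controls (Beta-Binomial model).
# Historical counts are discounted by a factor a0 in [0, 1]:
# a0 = 0 ignores the historical data entirely; a0 = 1 pools it fully.

def posterior_beta(successes, n, hist_successes, hist_n, a0,
                   alpha0=1.0, beta0=1.0):
    """Return the posterior Beta(alpha, beta) parameters for a response
    rate, with historical data downweighted by power-prior factor a0."""
    alpha = alpha0 + a0 * hist_successes + successes
    beta = beta0 + a0 * (hist_n - hist_successes) + (n - successes)
    return alpha, beta

# Hypothetical example: current control arm has 60/100 responses;
# prior iterations of the survey (pooled) had 550/1000 responses.
for a0 in (0.0, 0.5, 1.0):
    a, b = posterior_beta(60, 100, 550, 1000, a0)
    print(f"a0={a0}: posterior mean response rate = {a / (a + b):.3f}")
```

The appeal for the incentive problem is that `a0` encodes exactly the worry above: how much the survey has drifted since the last iteration. Choosing it (or placing a prior on it) is the hard part, but even a conservative value lets prior iterations shrink the control-arm uncertainty instead of being thrown away.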