This post by Andy Peytchev got me thinking about experimental results. It seems like we spend a lot of effort on experiments that are replicated elsewhere. I've been part of many incentive experiments. Only some of those results are published. It would be nice if more of those results were widely available.
Each study is a little different, and may need to evaluate incentives under its own "essential conditions." Some of that replication is good, but the overall design of these experiments seems pretty inefficient. We typically evaluate incentives at specific points in time, then change the incentive. It's like a step function.
I keep thinking there has to be inefficiency in that process. First, if we don't choose the right time to run a new experiment, we lose efficiency, response rates, or both. Second, we typically ignore our prior information and simply allocate half the sample to each of two conditions. Third, we set up ad hoc methods for administering the treatments, since our systems (broadly defined) are built for a single treatment.
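On the second point, even a simple alternative to a 50/50 split would use whatever prior estimates we have. One standard option (my illustration, not something from the post) is Neyman allocation, which splits the sample in proportion to each arm's outcome standard deviation; for a binary response that is sqrt(p(1-p)). The incentive amounts and response rates below are hypothetical.

```python
import math

def neyman_allocation(p1, p2, n):
    """Split n cases between two arms in proportion to each arm's
    outcome standard deviation, sqrt(p * (1 - p)) for a binary
    response. This minimizes the variance of the estimated
    difference in response rates for a fixed total n."""
    s1 = math.sqrt(p1 * (1 - p1))
    s2 = math.sqrt(p2 * (1 - p2))
    n1 = round(n * s1 / (s1 + s2))
    return n1, n - n1

# Hypothetical priors: current incentive ~30% response, new one ~50%.
print(neyman_allocation(0.30, 0.50, 1000))  # slightly favors the 50% arm
```

When the two prior response rates are equal, this reduces to the usual even split, so it only departs from 50/50 when prior information actually says the arms differ.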
It might be nice to design a method for continually assessing incentives. This would involve low-level, continuous monitoring. It could be more efficient, and it might even be a useful service for other, similar surveys.
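One way this kind of continual assessment could work (again, my sketch, not a design from the post) is a Thompson-sampling scheme: each arm keeps a Beta posterior over its response rate, each incoming case is assigned to the arm with the highest sampled rate, and the posterior is updated as responses arrive. The incentive amounts, pseudo-counts, and response rates are all hypothetical.

```python
import random

def thompson_assign(arms):
    """Assign the next case to the arm with the highest response
    rate drawn from that arm's Beta posterior."""
    draws = {name: random.betavariate(a, b) for name, (a, b) in arms.items()}
    return max(draws, key=draws.get)

def update(arms, arm, responded):
    """Update the chosen arm's Beta(successes, failures) counts."""
    a, b = arms[arm]
    arms[arm] = (a + 1, b) if responded else (a, b + 1)

# Hypothetical prior beliefs, encoded as Beta pseudo-counts:
# a $5 incentive yields roughly 30% response, $10 roughly 35%.
arms = {"$5": (30, 70), "$10": (35, 65)}

# Simulated fieldwork: cases are assigned adaptively and the
# posteriors are updated as (simulated) responses come in.
random.seed(1)
true_rates = {"$5": 0.30, "$10": 0.35}
counts = {"$5": 0, "$10": 0}
for _ in range(1000):
    arm = thompson_assign(arms)
    counts[arm] += 1
    update(arms, arm, random.random() < true_rates[arm])

print(counts)  # allocation tends to drift toward the better-performing incentive
```

The appeal for surveys is that there is no single "experiment date": the monitoring runs at a low level throughout data collection, and the step function becomes a gradual shift in allocation.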