One problem we face in evaluating experiments in face-to-face surveys, where the interviewer decides when to call, when to leave SIMY cards, and so on, is that we don't know whether the interviewer actually followed our recommendation. They may simply have happened to do the very thing we recommended without ever viewing the recommendation. I'm facing this problem with both the SIMY card experiment and the call scheduling experiment.
We could ask interviewers whether they followed the recommendation, but their answers are unlikely to be reliable. My current plan is to save the statistical recommendation for all cases (experimental and control) and compare how often the recommendation is followed in each group. In the control group, the recommendation is never revealed to the interviewer. If the recommendation is "followed" more often in the group where it is revealed, then it appears to have had an impact on the choices the interviewers made.
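As a rough illustration of that comparison, here is a minimal Python sketch. It assumes a hypothetical table `cases` with one row per sample case, a boolean column `revealed` (True for the experimental group, where the recommendation is shown to the interviewer) and a boolean column `followed` (True when the interviewer's action matched the saved recommendation). The column names and the choice of a two-proportion z-test are my illustrative assumptions, not part of the design described above.

import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

def compare_adherence(cases: pd.DataFrame) -> None:
    # Count "followed" outcomes and group sizes in each arm.
    # groupby on a boolean sorts False (hidden) before True (revealed).
    grouped = cases.groupby("revealed")["followed"].agg(["sum", "count"])
    successes = grouped["sum"].to_numpy()
    totals = grouped["count"].to_numpy()

    # Observed adherence rate in each arm.
    rates = successes / totals
    print(f"Adherence when hidden:   {rates[0]:.3f} (n={totals[0]})")
    print(f"Adherence when revealed: {rates[1]:.3f} (n={totals[1]})")

    # One-sided two-proportion z-test: adherence in the hidden arm
    # smaller than in the revealed arm.
    stat, pvalue = proportions_ztest(successes, totals, alternative="smaller")
    print(f"z = {stat:.2f}, p = {pvalue:.4f}")

If the revealed group "follows" the recommendation at a meaningfully higher rate than the control group, that difference is the evidence that the recommendation, and not coincidence, shaped the interviewers' choices.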