I'm going to be at MAPOR talking about a mixed-mode experiment that we did last year. We randomized the sequence of two modes -- mail and face-to-face. The latter mode is (obviously) interviewer-administered. One of the difficulties in administering this type of experiment is that it's hard to apply the treatment (face-to-face attempts) evenly across all the cases. We can look at outcomes and note whether the treatments were applied differently using simple measures like the number of attempts, but that certainly doesn't capture the whole picture.
In the end, we usually default to an "intent-to-treat" analysis that acknowledges that cases will get different dosages of the treatment but ignores those differences in the analysis (i.e., even cases with fewer than the prescribed number of attempts are included). I imagine that different survey organizations would differ on these sorts of outcomes, so it seems important to describe who actually received the treatment and who did not. What kind of variation is there? Describing that variation should help replicate results across organizations.
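To make the distinction concrete, here is a minimal sketch of intent-to-treat versus per-protocol estimates on entirely made-up data. Everything in it is hypothetical -- the arm labels, the prescribed number of attempts, and the response propensities are illustrative assumptions, not figures from the experiment:

```python
import random

random.seed(42)

# Hypothetical setup: each case is randomized to a mode sequence
# ("f2f-first" vs. "mail-first"); "attempts" is how many face-to-face
# attempts the case actually received; "responded" is the outcome.
PRESCRIBED_ATTEMPTS = 3  # assumed protocol, for illustration only

cases = []
for i in range(1000):
    arm = "f2f-first" if i % 2 == 0 else "mail-first"
    # Treatment cases get an uneven dose of attempts, as in the field.
    attempts = random.randint(0, PRESCRIBED_ATTEMPTS) if arm == "f2f-first" else 0
    # Assumed: response propensity rises with each attempt applied.
    p = 0.30 + 0.08 * attempts
    cases.append({"arm": arm, "attempts": attempts,
                  "responded": random.random() < p})

def response_rate(subset):
    return sum(c["responded"] for c in subset) / len(subset)

# Intent-to-treat: compare arms as randomized, ignoring actual dosage.
itt = {arm: response_rate([c for c in cases if c["arm"] == arm])
       for arm in ("f2f-first", "mail-first")}

# Per-protocol: keep only treatment cases that got the full dose.
per_protocol = response_rate(
    [c for c in cases
     if c["arm"] == "f2f-first" and c["attempts"] >= PRESCRIBED_ATTEMPTS])

print(itt)
print(per_protocol)
```

The point of the sketch is that the two estimates answer different questions: the ITT contrast preserves the randomization but dilutes the treatment effect with under-dosed cases, while the per-protocol rate conditions on a post-randomization variable (attempts actually received) and so can be biased by whatever drives interviewers to attempt some cases more than others.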