I've been struggling with the concept of "mode preference." It's a term we use to describe the idea that respondents might prefer a particular mode and that, if we can identify or predict those preferences, we can design a better survey (e.g. by offering each person their preferred mode).
In practice, I worry that people don't actually prefer modes. If you ask people what mode they might prefer, they usually say the mode in which the question is asked. In other settings, the response to that sort of question is only weakly predictive of actual behavior.
I'm not sure the distinction between stated and revealed preferences is going to advance the discussion much either. The problem is that the language builds in an assumption that people actually have a preference. Most people don't think about survey modes, and most don't consider modes abstractly in the way methodologists might. In fact, these choices are likely probabilistic functions that hinge on the characteristics of the survey (contact mode, etc.) and unobserved characteristics of the sampled person (e.g. whether they are busy when they get the request). Is it a preference if I like something one day and not the next? That might be a bit of hyperbole, but I don't believe that most people actually have stable mode preferences.
For me, the interesting thing is to identify the probability of response under different modes for subgroups in the population. That way, we can trade off errors and costs in order to optimize surveys (a small sketch of that tradeoff follows below). Further, mixed modes might be a natural fit if we acknowledge that unobserved characteristics are influencing the decision to participate. I might do a web survey this month, but next month it would be easier to catch me by phone. My schedule changes in ways that the survey organization can't observe.
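To make that cost-error tradeoff concrete, here is a minimal sketch in Python. All of the response propensities, costs, subgroup labels, and the response-rate floor are made up for illustration; in practice the propensities would come from a model fit to prior waves or a pilot.

```python
# Minimal sketch: choosing a mode per subgroup by trading off cost against
# response propensity. All numbers below are hypothetical.

# Estimated probability of response by (subgroup, mode) -- assumed inputs.
response_prob = {
    ("young_urban", "web"): 0.45, ("young_urban", "phone"): 0.20,
    ("older_rural", "web"): 0.15, ("older_rural", "phone"): 0.40,
}

# Cost per attempted case by mode (hypothetical).
cost_per_attempt = {"web": 2.0, "phone": 15.0}

def expected_cost_per_complete(group, mode):
    """Cost of one attempt divided by the chance it yields a complete."""
    return cost_per_attempt[mode] / response_prob[(group, mode)]

def cheapest_adequate_mode(group, min_response_prob=0.30):
    """Pick the lowest cost-per-complete mode that clears a response floor."""
    candidates = [
        m for m in cost_per_attempt
        if response_prob[(group, m)] >= min_response_prob
    ]
    if not candidates:
        # No single mode is good enough; a mixed-mode design may be needed.
        return None
    return min(candidates, key=lambda m: expected_cost_per_complete(group, m))

for g in ("young_urban", "older_rural"):
    print(g, cheapest_adequate_mode(g))
# Under these made-up numbers: young_urban -> web, older_rural -> phone.
```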
An interesting question might be: what characteristics can we observe, or ask about in wave 1 of a panel survey, that help us predict participation rates under different modes? I'm not sure what those might be, but it would be interesting to explore.
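One way to explore that would be to fit a separate response-propensity model for each mode using wave-1 covariates. The sketch below does this with simulated data and invented covariates (age, internet use, an irregular-schedule flag); in a real panel the covariates would be wave-1 answers and the outcomes would be observed response under a randomized mode assignment in a later wave.

```python
# Sketch: predicting participation under different modes from wave-1
# characteristics. Data here are simulated purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical wave-1 covariates: age, daily internet use (hours),
# and an indicator for an irregular work schedule.
age = rng.integers(18, 85, n)
internet_hours = rng.gamma(2.0, 1.5, n)
irregular_schedule = rng.integers(0, 2, n)
X = np.column_stack([age, internet_hours, irregular_schedule])

# Simulated wave-2 response under each mode; in practice this comes from
# a randomized mode assignment or the observed mode of completion.
p_web = 1 / (1 + np.exp(-(-1.0 - 0.02 * (age - 45) + 0.4 * internet_hours)))
p_phone = 1 / (1 + np.exp(-(-0.5 + 0.03 * (age - 45) - 0.6 * irregular_schedule)))
y_web = rng.binomial(1, p_web)
y_phone = rng.binomial(1, p_phone)

# Fit one response-propensity model per mode.
model_web = LogisticRegression().fit(X, y_web)
model_phone = LogisticRegression().fit(X, y_phone)

# Predicted response probabilities for a new panel member:
# 70 years old, half an hour of internet use per day, irregular schedule.
new_person = np.array([[70, 0.5, 1]])
print("web:  ", model_web.predict_proba(new_person)[0, 1])
print("phone:", model_phone.predict_proba(new_person)[0, 1])
```

Coefficients or predicted probabilities from models like these would give a first answer to which wave-1 characteristics are informative about mode-specific participation.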