
Posts

Showing posts with the label Panel survey

Survey Modes and Recruitment

I've been struggling with the concept of "mode preference." It's a term we use to describe the idea that respondents might have preferences for a particular mode, and that if we can identify or predict those preferences, then we can design a better survey (e.g., by giving people their preferred mode). In practice, I worry that people don't actually prefer modes. If you ask people what mode they might prefer, they usually say the mode in which the question is asked. In other settings, responses to that sort of question are only weakly predictive of actual behavior. I'm not sure the distinction between stated and revealed preferences is going to advance the discussion much either. The problem is that the language builds in an assumption that people actually have a preference. Most people don't think about survey modes, and most don't consider modes abstractly in the way methodologists might. In fact, these choices are likely probabilistic functions that hinge on ...

Future of Responsive and Adaptive Design

A special issue of the Journal of Official Statistics on responsive and adaptive design recently appeared. I was an associate editor for the issue and helped draft an editorial that raised issues for future research in this area. The last chapter of our book on Adaptive Survey Design also defines a set of questions that remain open. I think one of the more important areas of research is identifying targeted design strategies. This differs from current procedures, which often sequence the same protocol across all cases. For example, everyone gets web, then those who haven't responded to web get mail. A targeted approach, on the other hand, would find one subgroup amenable to web and another amenable to mail, and lead with the right mode for each (a sketch follows below). This is a difficult task, as most design features have been explored with respect to the entire population, and we know less about subgroups. Further, we often have very little information with which to define these groups. We may not even have basic household or person ...
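To make the contrast concrete, here is a minimal sketch in Python. The propensity fields and the cutoff rule are hypothetical; in practice they would come from a response-propensity model fit on frame or prior-wave data.

```python
# Minimal sketch of sequenced vs. targeted protocols. The case fields
# (p_web, p_mail) are hypothetical predicted response propensities.

def sequenced_protocol(case):
    """Everyone gets the same sequence: web first, then mail."""
    return ["web", "mail"]

def targeted_protocol(case):
    """Lead with the mode the case is predicted to be most amenable to."""
    if case["p_web"] >= case["p_mail"]:
        return ["web", "mail"]  # web-amenable subgroup
    return ["mail", "web"]      # mail-amenable subgroup

cases = [
    {"id": 1, "p_web": 0.6, "p_mail": 0.3},
    {"id": 2, "p_web": 0.2, "p_mail": 0.5},
]
for c in cases:
    print(c["id"], targeted_protocol(c))
```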

Learning from paradata

Susan Murphy's work on dynamic treatment regimes had a big impact on me as I was working on my dissertation. I was very excited about the prospect of learning from the paradata. I did a lot of work on trying to identify the best next step based on analysis of the history of a case. Two examples were 1) choosing the lag before the next call along with the incentive, and 2) the timing of the next call. At this point, I'm a little less sure of the utility of the approach in those settings, where I was looking at call record paradata. I think the paradata are not at all correlated with most survey outcomes, so it's difficult to identify strategies that will do anything but improve efficiency. That is, changes in strategies based on analysis of call records aren't very likely to change estimates. Still, I think there are some areas where the dynamic treatment regime approach can be useful. The first is mode switching. Modes are powerful, and offering them i...
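As a toy illustration of what a learned rule over call records might look like, here is a sketch in Python. The call windows, history fields, and fallback rule are all invented for illustration; a real dynamic treatment regime would be estimated from data rather than hand-coded.

```python
from collections import Counter

# Toy decision rule in the spirit of a dynamic treatment regime:
# pick the next call window from a case's call-record history.

WINDOWS = ["morning", "afternoon", "evening"]

def next_call_window(call_history):
    """call_history: list of (window, contacted) tuples for one case."""
    attempts = Counter(w for w, _ in call_history)
    contacts = Counter(w for w, hit in call_history if hit)

    # Try any untried window first; the data are too thin otherwise.
    untried = [w for w in WINDOWS if attempts[w] == 0]
    if untried:
        return untried[0]

    # Otherwise pick the window with the best observed contact rate.
    return max(WINDOWS, key=lambda w: contacts[w] / attempts[w])

history = [("morning", False), ("evening", True), ("morning", False)]
print(next_call_window(history))  # afternoon is untried, so try it
```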

What is the right periodicity?

It seems that intensive measurement is on the rise. There are a number of things that are difficult to recall accurately over longer periods, where it might be preferable to ask the question more frequently with a shorter reference period. For example, the number of alcoholic drinks consumed per day: more accurate measurements might be achieved if the question were asked daily about the previous 24-hour period. But what is the right period of time? And how do you determine that? This might be an interesting question. The studies I've seen tend to guess at what the correct periodicity is. I think it's probably the case that determining it would require some experimentation, including experimentation in the lab. There are a couple of interesting wrinkles to this problem. 1. How do you set the periodicity when you measure several things that might have different periodicities? Ask all the questions at the most frequent periodicity, or keep each item on its own schedule (see the sketch below)? 2. How does non...
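On the first wrinkle, one alternative to asking everything at the most frequent periodicity is to keep each item on its own schedule. A small sketch, with made-up items and periods:

```python
# Sketch of a measurement calendar where each item keeps its own
# periodicity. The item names and periods are made up for illustration.

items = {
    "alcohol_use": 1,       # daily, 24-hour reference period
    "doctor_visits": 7,     # weekly
    "major_purchases": 30,  # monthly
}

def schedule(items, n_days):
    """Return {day: [items due that day]} for a study of n_days."""
    plan = {}
    for day in range(1, n_days + 1):
        due = [name for name, period in items.items() if day % period == 0]
        if due:
            plan[day] = due
    return plan

for day, due in schedule(items, 14).items():
    print(day, due)
```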

Personalized Survey Design

In my last post, I talked about personalized medicine. I found out this week that in personalized medicine, there is a distinction between targeted and tailored treatments. Targeted treatments are aimed at specified subgroups of the population, while tailored protocols are individual-specific treatments that may start from a targeted treatment but use within-patient variation to "tune" treatments over time. I wonder whether this kind of tailored protocol is possible for surveys. Panel surveys are one area where it may be. But it seems that the panel would have to have many waves or repetitions; there might not be enough measured within-person variation with only a few waves. What's a few? Let's say fewer than 10 or 20. It seems like these methods might have an application in surveys that use frequent measurement and/or run over a relatively long period of time. For example, imagine a survey that collected data weekly for 2 or 3 years. O...
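A stylized sketch of the distinction, with an incentive amount standing in as the design feature. The subgroup rules and the within-person update rule are invented for illustration:

```python
# Targeted rule: fixed per subgroup. Tailored rule: starts from the
# targeted rule, then tunes within person using wave-by-wave outcomes.
# The amounts and the update rule are invented examples.

def targeted_incentive(subgroup):
    """One fixed rule per subgroup."""
    return {"low_propensity": 20, "high_propensity": 5}[subgroup]

def tailored_incentive(subgroup, wave_outcomes):
    """Tune the targeted amount using one person's response history."""
    amount = targeted_incentive(subgroup)
    for responded in wave_outcomes:
        # step up after a miss, step down after a response
        amount = amount + 5 if not responded else max(amount - 2, 0)
    return amount

print(targeted_incentive("low_propensity"))                        # 20
print(tailored_incentive("low_propensity", [True, False, False]))  # 28
```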

Selection Effects

This comes up frequently, in a number of different ways. We talk a lot about nonresponse and how it may be a selective process that produces biases. We might try to model this process in order to correct those biases. Online panels and 'big data' sources like Twitter have their own selection processes. It seems important to understand these processes. Can they be captured with simple demographics? If not, what else do we need to know? We have done some of this work for survey nonresponse; I'm not sure what is known about online panels or Twitter relative to this question.

Adaptive Design and Panel Surveys

I read this very interesting blog post by Peter Lugtig yesterday. The slides from the talk he describes are also linked from the post. He builds on an analysis of classes of nonresponders: several distinct patterns of nonresponse are identified, and the characteristics of persons in each class are then described. For example, some drop out early, some "lurk" around the survey, some stay more or less permanently. He suggests that it might be smart to identify design features that are effective for each of the groups and then tailor those features to the subgroups in an adaptive design. This makes a lot of sense, and panel studies are an attractive place to start doing this kind of work. In the panel setting, there is a lot more data available on cases, which can help in identifying subgroups. And, with repeated trials of the protocol, it may be possible to improve outcomes (response) over time. I think the hard part is creating the groups. This reminds me of a problem that I read...
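To make "creating the groups" concrete, here is a rough sketch of classifying wave-by-wave participation histories into pattern groups and mapping each group to a design feature. The thresholds, group names, and feature choices are all hypothetical:

```python
# Rough sketch: classify participation patterns into nonresponse
# groups, then map each group to a tailored design feature.
# Thresholds, group names, and features are hypothetical.

def classify(pattern):
    """pattern: list of 0/1 participation indicators across waves."""
    rate = sum(pattern) / len(pattern)
    if rate >= 0.9:
        return "stayer"
    # early dropout: some early participation, none in the later waves
    if any(pattern) and not any(pattern[len(pattern) // 2:]):
        return "early_dropout"
    return "lurker"  # intermittent participation

FEATURES = {
    "stayer": "standard protocol",
    "early_dropout": "extra contact effort and incentive in early waves",
    "lurker": "reminders timed to previously responsive waves",
}

for p in ([1, 1, 1, 1, 1, 1], [1, 1, 0, 0, 0, 0], [1, 0, 1, 0, 1, 0]):
    group = classify(p)
    print(p, group, "->", FEATURES[group])
```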