Friday, February 3, 2017

Survey Data and Big Data... or is it Big Data and Survey Data?

It seems like survey folks have thought about the use of big data mostly as a problem of linking big data to survey data. This is certainly a very useful thing to do. The model starts from the survey data and adds the big data, which reduces the burden on respondents and may improve the accuracy of the data.

But I am also having conversations that start from big data and then fill the gaps with survey data. For instance, in looking for suitable readings on using big data and survey data, I found several interesting articles from folks working with big data who use survey data to validate the inferences they make from those data, as with this study of travel based upon GPS data, or to understand missing data in electronic health records, as with this study.

Now I'm also hearing discussion of how surveys might be triggered by events in the big data. The survey can answer the "why" question: why the change? This makes for an interesting idea. The big data are the starting point, while the survey data fill the gaps.
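
To make the idea concrete, here is a minimal sketch of what an event-triggered survey workflow might look like. The data structure, the threshold, and the queue_survey_invitation function are all hypothetical; the point is only that the big data feed decides when a short "why" survey gets asked.

```python
from dataclasses import dataclass

@dataclass
class CaseRecord:
    case_id: str
    previous_value: float   # e.g., last period's trips, visits, or purchases
    current_value: float    # the same measure from the latest big-data feed

def detect_events(records, relative_change_threshold=0.5):
    """Flag cases whose measure changed enough to warrant a short 'why' survey."""
    flagged = []
    for r in records:
        if r.previous_value == 0:
            continue  # avoid dividing by zero; new cases would be handled separately
        change = abs(r.current_value - r.previous_value) / r.previous_value
        if change >= relative_change_threshold:
            flagged.append(r.case_id)
    return flagged

def queue_survey_invitation(case_id):
    # Placeholder: in practice this would push the case into the survey
    # management system with the short "why did this change?" module attached.
    print(f"Queue short follow-up survey for case {case_id}")

records = [
    CaseRecord("A101", previous_value=20, current_value=22),
    CaseRecord("A102", previous_value=20, current_value=4),
]
for case_id in detect_events(records):
    queue_survey_invitation(case_id)
```

In a real application the detection rule would come from the substantive question, not an arbitrary threshold, and the invitation would flow through whatever survey management system the study already uses.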

Both approaches are valid and useful. As we develop more and more approaches to these uses of data, we may need some new taxonomies to help us think through all the options we have.

Friday, January 20, 2017

What is the right periodicity?

It seems that intensive measurement is on the rise. There are a number of things that are difficult to recall accurately over longer periods of time, where it might be preferable to ask the question more frequently with a shorter reference period. For example, the number of alcoholic drinks consumed per day. More accurate measurements might be achieved if the question were asked daily about the previous 24-hour period.

But what is the right period of time? And how do you determine that? This might be an interesting question. The studies I've seen tend to guess at what the correct periodicity is. I think determining it would require some experimentation, including experimentation in the lab.
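
As a rough illustration of the kind of experimentation I have in mind, here is a toy simulation. The recall-error model (reports shrink as the reference period gets longer) and all of its parameters are assumptions made purely for the sketch; real values would have to come from lab or field experiments.

```python
import numpy as np

# Toy simulation: under an assumed recall-error model, how far off is the
# total number of drinks reported for 1-, 7-, and 30-day reference periods?
rng = np.random.default_rng(42)
n_days = 30
true_daily = rng.poisson(1.2, size=n_days)  # simulated "true" drinks per day

def simulated_report(true_counts, period, forgetting_rate=0.03):
    """Sum reports over each reference period, with underreporting that grows
    with the length of the period (an assumed, not estimated, error model)."""
    total = 0.0
    for start in range(0, len(true_counts), period):
        window = true_counts[start:start + period]
        recall_factor = max(0.0, 1.0 - forgetting_rate * (period - 1))
        total += window.sum() * recall_factor
    return total

true_total = true_daily.sum()
for period in (1, 7, 30):
    reported = simulated_report(true_daily, period)
    print(f"{period:2d}-day reference period: reported {reported:5.1f} vs true {true_total}")
```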

There are a couple of interesting wrinkles to this problem.

1. How do you set the periodicity when you measure several things that might have different natural periodicities? Do you ask all the questions at the most frequent periodicity?

2. How does nonresponse/attrition fit into this? If some people will only respond at a certain rate, what should you do? Is it better to force the issue, i.e. insist that they participate at the rate we desire or not at all, or to allow them to participate at their preferred rate?

I'm sure the answers vary across the substantive areas of interest. But it does seem like an interesting set of problems in the evolving world of survey research.

Friday, January 13, 2017

Slowly Declining Response Rates are the Worst!

I have seen this issue on several different projects, so I'm not calling out anyone in particular. I just keep running into it. Repeated cross-sectional surveys are the most glaring example, but I think it happens in other places as well.

The issue is that with a slow decline, it's difficult to diagnose the source of the problem. If every step is just a little bit more difficult (contacting persons, convincing people to list a household, finding the selected person, convincing them to do the survey, and so on), then it's difficult to identify solutions.
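
One way to see the difficulty is to decompose the response rate into its component stages across years. The sketch below uses made-up numbers; with a slow decline, every stage drifts down a point or two and no single stage stands out as the culprit.

```python
import pandas as pd

# Illustrative (invented) stage-specific rates for a repeated cross-section.
rates = pd.DataFrame(
    {
        "contact_rate":     [0.92, 0.90, 0.89, 0.87],
        "screening_rate":   [0.88, 0.87, 0.86, 0.85],
        "cooperation_rate": [0.78, 0.76, 0.75, 0.73],
    },
    index=[2013, 2014, 2015, 2016],
)

# Overall response rate as the product of the component stages.
rates["response_rate"] = rates.prod(axis=1)
print(rates.round(3))

# Year-over-year change in each component: when every column drifts down a
# little, no single stage presents itself as "the" problem to fix.
print(rates.diff().round(3))
```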


One issue that this sometimes creates is that we keep adding a little more effort each time to try to counteract the decline. A few more calls. A slightly longer field period. We don't then search for qualitatively different solutions.

That's not to say that we shouldn't make the small changes. Rather, they might need to be combined with longer-term planning for larger changes. That's often difficult to do. But it's another argument for ongoing experimentation with new methods.

Friday, December 9, 2016

Responsive Survey Design Short Course

I don't do a whole lot of advertising on the blog, but I did want to post about a set of short courses that we will be offering here in Ann Arbor next summer. These courses are the first three days of what will eventually be a full two-week course. We have some great instructors lined up. We are going to teach techniques of responsive survey design that can be used across a variety of studies. If you are interested, follow this link for more information.

Friday, November 18, 2016

The Cost of a Call Attempt

We recently did an experiment with incentives on a face-to-face survey. As one aspect of the evaluation of the experiment, we looked at the costs associated with each treatment (i.e. different incentive amounts).

The costs are a bit complicated to parse out. The incentive amount is easy, but the interviewer time is hard. Interviewers record their time at the day level, not at the housing unit level. So it's difficult to determine how much a call attempt costs.

Even if we had accurate data on the time spent making the call attempt, there would still be all the travel time from the interviewer's home to the area segment. If I could accurately calculate that, how would I spread it across the cost of call attempts? This might not matter if all I'm interested in is calculating the marginal cost of adding an attempt to a visit to an area segment. But if I want to evaluate a treatment -- like the incentive experiment -- I need to account for all the interviewer costs, as best as I can.

A simple approach is to just divide the interviewer hours by the total number of call attempts. This gives an average that might be useful for some purposes. Or I can try to account for differences in the lengths of different types of call attempt outcomes. If the distribution of outcome types differs across treatments, then the average length of any attempt might not be a fair comparison of the costs of the two treatments.
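
Here is a small sketch of both approaches. The data structures, the hourly rate, and the assumed relative lengths of the outcome types are all illustrative; they are not the actual cost model from the experiment.

```python
import pandas as pd

# Hypothetical inputs: interviewer timesheets recorded at the day level and a
# call-record file. All column names and values are made up for illustration.
timesheets = pd.DataFrame({
    "treatment": ["low_incentive", "low_incentive", "high_incentive", "high_incentive"],
    "hours": [6.5, 7.0, 5.5, 8.0],
})
call_records = pd.DataFrame({
    "treatment": ["low_incentive"] * 3 + ["high_incentive"] * 3,
    "outcome": ["no_contact", "contact", "interview", "no_contact", "no_contact", "interview"],
})

HOURLY_RATE = 25.0  # assumed cost per interviewer hour

# Simple approach: spread all interviewer hours evenly over all call attempts.
hours_by_trt = timesheets.groupby("treatment")["hours"].sum()
attempts_by_trt = call_records.groupby("treatment").size()
print(hours_by_trt * HOURLY_RATE / attempts_by_trt)

# Rough refinement: weight attempts by assumed relative lengths of outcome
# types, so a treatment that yields more (long) interviews is not charged the
# same per-attempt cost as one that yields mostly short no-contact attempts.
relative_length = {"no_contact": 1.0, "contact": 2.0, "interview": 6.0}  # assumed
weights = call_records["outcome"].map(relative_length)
weight_by_trt = weights.groupby(call_records["treatment"]).sum()
cost_per_unit = hours_by_trt * HOURLY_RATE / weight_by_trt

# Implied cost of each type of attempt, by treatment.
implied = pd.DataFrame({
    trt: {outcome: length * cost_per_unit[trt]
          for outcome, length in relative_length.items()}
    for trt in cost_per_unit.index
})
print(implied)
```

Neither version touches the travel-time problem; both simply allocate whatever hours were recorded, which is exactly why the purpose of the estimate has to drive how far you push the refinement.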

I suspect that the problem can only be "solved" by defining the specific purpose for the estimate, and then thinking about how errors in the estimate might impact the decision. In other words, how bad does the estimate have to be to lead you to the wrong decision? I think there are a number of interesting cost problems like this, where we haven't measured the costs directly but need to use some proxy measure that might have errors of different kinds.

Friday, November 11, 2016

Methodology on the Margins

I'm thinking again about experiments that we run. Yes, they are usually messy. In my last post, I talked about the inherent messiness of survey experiments that is due to the fact that surveys have many design features to consider. And these features may interact in ways that mean we can't simply pull out an experiment on a single feature and generalize the result to other surveys.

But I started thinking about other problems we have with experiments. I think another big issue is that methodological experiments are often run as "add-ons" to larger surveys. It's hard to obtain funding to run a survey just to do a methodological experiment. So, we add our experiments to existing surveys.

The problem is that this approach usually creates a limitation: the experiments can't risk creating a problem for the survey. In other words, they can't lead to reductions in response rates or threaten other targets that are associated with the main (i.e. non-methodological) objective of the survey. The result is that the experiments are often confined to things that can only have small effects.

A possible exception is when a large, ongoing survey undertakes a redesign. The problem is that this only happens for large surveys, and the research is still shaped by the objectives of that particular survey. I'd like to see this happen more generally. It would be nice to have some surveys with a methodological focus that could provide results that generalize to a population of smaller-scale surveys. Such a survey could also have a secondary substantive goal.

Friday, November 4, 2016

Messy Experiments

I have this feeling that survey experiments are often very messy. Maybe it's just in comparison to the ideal type -- a laboratory with a completely controlled environment where only one variable is altered between two randomly assigned groups.

But still, surveys have a very complicated structure. We often call this the "essential survey conditions." But that glib phrase might hide some important details. My concern is that when we focus on a single feature of a survey design, e.g. incentives, we might come to the wrong conclusion if we don't consider how that feature interacts with other design features.

This matters when we attempt to generalize from published research to another situation. If we only focus on a single feature, we might come to the wrong conclusion. Take the well-known result -- incentives work! Except that the impact of incentives seems to be different for interviewer-administered surveys than for self-administered surveys. The other features of the design are also important and may moderate the expected effect of the feature under consideration.
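
A toy example makes the point. Suppose (the effect sizes here are invented) that an incentive raises response propensity much more in self-administered surveys than in interviewer-administered ones. A pooled estimate still says "incentives work," but it hides the fact that the effect you would actually get depends on the mode.

```python
import numpy as np

# Simulated data under assumed effects: +8 points in self-administered mode,
# +2 points in interviewer-administered mode. Purely illustrative numbers.
rng = np.random.default_rng(0)
n = 10_000
mode = rng.choice(["self", "interviewer"], size=n)
incentive = rng.choice([0, 1], size=n)

base = np.where(mode == "self", 0.30, 0.55)
effect = np.where(mode == "self", 0.08, 0.02)
responded = rng.random(n) < base + effect * incentive

# Pooled estimate ignores the interaction with mode.
pooled = responded[incentive == 1].mean() - responded[incentive == 0].mean()
print(f"Pooled incentive effect: {pooled:.3f}")

# Mode-specific estimates recover the very different effects.
for m in ("self", "interviewer"):
    mask = mode == m
    eff = (responded[mask & (incentive == 1)].mean()
           - responded[mask & (incentive == 0)].mean())
    print(f"Effect in {m}-administered surveys: {eff:.3f}")
```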

Every time I start to write a literature review, this issue comes up as I try to reconcile the inevitably conflicting results. Of course, there are other explanations, such as the normal noise associated with published research results. But this interaction among design features is another potential reason that should be kept in mind.

The other side of the issue comes up when I'm writing up the methods used. Then I have to remind myself to be as detailed as possible in describing the survey design features so that the context of the results will be clear.
