Friday, December 12, 2014

Context and Daily Surveys

I've been reading an interesting book on daily diary surveys. One of the chapters, by Norbert Schwarz, makes some useful points about how frequent measurement may not be equivalent to a one-time measurement of the same phenomena.

Schwarz points to his well-known studies in which the response scale was varied. One of the questions asked how much TV people watch. One scale had a maximum of something like 10 or more hours per week, while the other had a maximum of 2.5 hours per week. The reported distributions changed across the two scales. It seems that people were taking normative cues from the scale, i.e. if 2.5 hours is a lot, "I must view less than that," or "I don't want to report that I watch that much TV when most other people are watching less."
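
To make the mechanism concrete, here is a toy simulation of how the response scale itself might shift what people report. The two scales, the norm_pull parameter, and the distribution of "true" viewing hours are all invented for illustration; this is a sketch of the idea, not Schwarz's design or data.

import random

# Toy illustration only: respondents with the same "true" viewing hours can end
# up reporting differently depending on the response scale, if they treat the
# scale's range as a cue to what a "normal" amount of viewing is.

random.seed(0)

# Hypothetical true weekly TV hours for 1,000 respondents.
true_hours = [max(0.0, random.gauss(4, 2)) for _ in range(1000)]

# Two hypothetical scales, given as category upper bounds in hours per week.
low_range_scale = [0.5, 1.0, 1.5, 2.0, 2.5]   # top (open) category: more than 2.5
high_range_scale = [2.5, 5.0, 7.5, 10.0]      # top (open) category: more than 10

def report(hours, scale, norm_pull=0.3):
    """Pick a category, nudging the answer toward the scale's midpoint to mimic
    a respondent inferring the 'typical' amount from the scale itself."""
    midpoint = scale[len(scale) // 2]
    adjusted = (1 - norm_pull) * hours + norm_pull * midpoint
    for i, upper_bound in enumerate(scale):
        if adjusted <= upper_bound:
            return i
    return len(scale)  # falls in the open-ended top category

def share_in_top_category(scale):
    reports = [report(h, scale) for h in true_hours]
    return sum(r == len(scale) for r in reports) / len(reports)

print("Low-range scale, share in top category :", share_in_top_category(low_range_scale))
print("High-range scale, share in top category:", share_in_top_category(high_range_scale))

Under these made-up assumptions, the low-range scale pushes far more respondents into its open-ended top category than the high-range scale does, even though the underlying behavior is identical, which is the kind of scale-induced shift the studies describe.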

He points out that daily surveys may provide similar contextual cues about normative behavior. If you ask someone about depressive episodes every day, they may infer that the norm is to have frequent depressive episodes, and that inference may influence their responses. If the goal is to get more accurate data, these sorts of method effects are not good.


Friday, December 5, 2014

Device Usage in Web Surveys

As I have been working on a web survey, I'm following more closely the devices that people use to complete web surveys. Results from Pew suggest that the younger generation will move away from PCs and access the internet through portable devices like smartphones. Some of these "portable" devices have become quite large.

This trend makes sense to me. I can do many, if not most, things from my phone. I heard on the news the other day that 25% of Cyber Monday shopping was done on tablets and phones. But some things are still easier to do on a PC. Do surveys fall into that group?

Peter Lugtig posted about a study he is working on that tracks the device used across waves of a panel survey. It appears that those who start on a PC stay on a PC, while those who start on a tablet or phone are more likely to switch to a PC. He also notes that those who used a tablet or phone in an early wave are less likely to do the survey at all in the next wave.

I haven't read the paper yet (there is a link on the blog), but I'm wondering about explanations. Could the experience be improved to avoid driving respondents to other devices or, worse, to non-participation? Would that require formatting or design changes? A reduction in length? If not, is it better to push respondents (in the panel context) to use a PC if doing so means getting more data (i.e. fewer persons, but more completed waves from each)?
