One of the hidden costs of paradata is the time spent analyzing these data. Here, we've spent a lot of time trying to find standard ways to convert these data into useful information. But often we end up doing specialized analyses, searching for an explanation of some issue, and sometimes that analysis doesn't lead to clear-cut answers.

In any event, paradata aren't just collected; they are also managed and analyzed. So there are costs to generating information from these data. We could probably think of this from a total survey error perspective: "Does this analysis reduce total error more than spending the same resources on additional interviews would?" In practice, such a question is difficult to answer. What is the value of the analysis we never did? And how much would it have cost?

There might be two extreme policies in this regard. One is "paralysis by analysis": continually seeking more information and delaying decisions. The other extreme is "flying by the seat of the pants": frequently making decisions without the information to support them. Most of the time, we choose a policy somewhere between these two extremes and hope that we've made a nearly optimal choice.

Perhaps the whole problem will go away as we become more adept at manipulating large, complicated data structures. Maybe. I can at least say that it hasn't happened yet.