In an earlier post, I suggested that survey methodologists are "data quality specialists." Our focus on "total survey error" (TSE) is, in many ways, the central defining concept of our field. This focus on data quality could be an important contribution that survey methodologists make to the emerging field of data science. But in order to make that contribution, we may need to test the fit of the TSE concept on evaluations of non-survey data.
One of the sources of error we examine in surveys is "nonresponse." Does this concept apply to other sources of data? Certainly other sources of data have missing data. But nonresponse is a specific mechanism: we sample a unit and request data, but the unit fails to supply the data.
How does this concept apply to other sources of data? I wouldn't say that Twitter data suffer from "nonresponse" because not everyone has a Twitter account, or because not everyone tweets on topics of interest. To me, those issues are more similar to problems of the sampling frame, i.e. coverage. There was never an expectation of observing data from those units.
Another example comes from a study of electronic health records (EHR) conducted by Haneuse and colleagues. In this case, they wanted to test the association between weight and depression, and they wanted the patient's weight from the EHR. The medical system they work with has a policy of weighing patients on every visit. So, we would expect every record of a visit to include a weight. But the records are missing weight about 30% of the time. In my view, that is akin to nonresponse: we expect a value to be recorded, but it isn't. Is "nonresponse" the right word? I'm not sure.
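The distinction matters operationally: we can only compute a nonresponse-style rate for fields we *expected* to be recorded. A minimal sketch of that check, using invented visit records (the field names and data here are hypothetical, not from the Haneuse study):

```python
# Hypothetical sketch: measuring expected-but-missing values in visit
# records, which is the situation akin to nonresponse. All data invented.

visits = [
    {"patient_id": 1, "weight_kg": 80.2},
    {"patient_id": 2, "weight_kg": None},   # weighing expected, not recorded
    {"patient_id": 3, "weight_kg": 67.5},
    {"patient_id": 4, "weight_kg": 92.0},
    {"patient_id": 5, "weight_kg": None},   # weighing expected, not recorded
    {"patient_id": 6, "weight_kg": 71.3},
    {"patient_id": 7, "weight_kg": 58.9},
    {"patient_id": 8, "weight_kg": None},   # weighing expected, not recorded
    {"patient_id": 9, "weight_kg": 85.4},
    {"patient_id": 10, "weight_kg": 76.1},
]

def missing_rate(records, field):
    """Share of records where an expected field was not recorded."""
    missing = sum(1 for r in records if r.get(field) is None)
    return missing / len(records)

print(missing_rate(visits, "weight_kg"))  # 3 of 10 records lack a weight
```

Note that this rate is only meaningful because the clinic's policy creates an expectation of a value on every visit; for Twitter-style data, where no such expectation exists, the same calculation would conflate coverage with missingness.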
We'll need to make an effort to get the terms right. They need to resonate with those who work with other sources of data. "Nonresponse" probably doesn't. "Missing data" is too general. What is the right term?