Don’t Quantify Online Qualitative Research

Filed Under: Best Practices, Market Research, Reporting, Tools & Techniques, Online Qualitative Research

By Joy Boggio, Director of Online Qualitative Support

We are adopting new technologies so fast that what was cutting edge last year is passé this year. The Wondrous recently had a great post about technologies that will soon be obsolete; think of the TV remote or the fax machine.

This reminded me of the debate over text analytics and verbatim management for online qualitative studies. The various TA software packages (Language Logic, Clarabridge) are said to push us into the future by "machining" the findings we cull from our boards. This all sounds promising, and many of us assumed it would save a moderator/analyst time and uncover insights buried beneath the vast amount of data that we no longer "live through."

It seemed, at first glance, to be a great solution. We realized gleefully, "Hey, we have so much data!" Then, in the next breath, "Oh no, we have so much data!" Unlike traditional qualitative research, in which the moderator is immersed in the data as it happens, the online qualitative moderator must sift through data that has accumulated over days. We must find a way to juggle and make sense of it all just to find the nuggets of information.

But how can we identify those nuggets quickly and efficiently? At C+R Research, we often face a seemingly overwhelming amount of data, so we have made many attempts to "machine" and organize our qualitative data.

At TMTRE, many others also talked about their attempts at automating and coding these responses. Most have come to the same conclusion we have: you simply have to read the comments from the boards. The data set, while it appears tremendous, is still too small to get good results from any automated method of sorting or coding. Many have tried Language Logic to categorize and NVivo to organize, but both add time and almost always require a second analyst, which can cause the subtext of the responses to be lost.

Automating the work does seem to have a place when you are dealing with multi-phase projects or talking to a few hundred or more respondents, but not with an average bulletin board of 20-30 people. What was the overall consensus? "It's QUAL; we shouldn't strive to quantify it!"
