
Does Using Software Cause Shallow Analysis?
I can’t believe this is still a question in the 2010s, but apparently it is. A few months ago, I joined a conversation on ResearchGate.com:
The original question:
Can you recommend a software for analyzing qualitative data (interview transcripts)?
A colleague of mine and I collected 28 interviews and transcribed them for a qualitative content analysis. Before starting with the content analysis manually, I wondered if you have used a software in the past that you can recommend in terms of usability, comprehensibility of the analysis, visualization and costs? If so, which one do you suggest and why?
To which someone responded:
Sorry to read that the siren call of these programs is straying you from the in-depth analytic nature of Qualitative Research. … 🙂
I posted a response, edited here for clarity and expanded slightly.
(Responder), you are asserting a false dichotomy. You can do in-depth analysis using software. The software is just a tool. Highlighters are a tool too, as are paper and scissors. Do you eschew their use? Do you argue that word processors reduce the quality and thoughtfulness of writing compared to paper and typewriters? (I actually heard that argument when home word processors first came out in the early 1980s.)
The software doesn’t DO the analysis, nor does it inherently discourage thinking. It just gives you tools for organizing and categorizing your data. Sure, people can do poor, thoughtless analysis in software, but they can do that with manual methods too.
I’ve also done some thoughtful analyses in Transana that would have been nearly impossible (or at best much more difficult and time-consuming) with manual methods. I know many researchers who’ve been able to do better research because of specific software features that would be very hard or impossible to replicate manually. (For example, analyses involving multi-camera data collection, or multiple-transcript analysis for examining the interrelationships between analytic layers within media data.)
Try this: take two interviews of equal length. Do one with manual methods. Do the other with Transana, linking the transcript to the original media file and then using Clips in Collections as your analytic tool within the software. (It’s the tool most directly analogous to manual methods.) I believe you will find that your analysis will be of equal quality.
For that first interview in the software, you’ll probably have to spend a little time getting used to it, learning its tools and terminology. (You had to learn manual methods once, and they felt a little awkward at the beginning too, but you may not remember that.)
But when you add the next 27 interviews in, you’ll find that it’s difficult to manage that much data with manual methods but quite easy with software. With software, you can easily shift from “what does this interview contribute to my analysis overall?” to “which interviews contribute to this analytic concept?”, while manual methods don’t really let you look at your data from both directions. With software, it’s two clicks to hear the participant’s underlying audio and check for sarcasm that would most likely be lost in manual methods.
There’s a lot more you can do with software, of course, but you get the point: software is a tool, and like any tool, it can be used well or poorly. The content of your analysis is what matters; the tools you use to get there matter less. Hopefully, software can make that analysis a bit easier and let you manage a bit more data and a bit more complexity. Maybe in some cases it can help an analysis be a little more sophisticated. But I absolutely reject the idea that doing qualitative analysis with manual methods is in any way morally superior or somehow makes one’s analysis deeper or more worthy.
I wonder if anyone’s ever done that study.