Word Frequencies and Word Clouds

For many qualitative researchers, looking at the specific word choices of research participants offers an important first step in exploring the data. You listen to or read your data over and over; you become immersed in it, trying to identify themes and connections. But it can also be very beneficial to simply create a list of the words participants use, count how often each term appears, and examine the contexts in which those words occur. This seemingly simple exercise lets the language of your data guide your interpretation, helping you note themes worthy of further exploration.

However, sorting and counting individual words by hand, or with a spreadsheet or word processor, can require hours of tedious work. In Transana, the Word Frequency Report takes text or transcribed data and easily generates a list of the words used in your Documents, Transcripts, Quotes, or Clips, along with the number of times each word appears.

The following examples draw on data from questions asked during the U.S. presidential debates of 2008, 2012, and 2016.

In addition to viewing the results as a list, you can generate a Word Cloud that presents your Word Frequency Report data graphically, with each word's frequency represented by its relative size.
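To illustrate the idea behind such a report (this is not Transana's code, just a minimal sketch of word counting and frequency-based sizing, with hypothetical function names and sample text):

```python
from collections import Counter
import re

def word_frequencies(text, stop_words=frozenset()):
    """Count how often each word appears, ignoring case and punctuation."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stop_words)

def cloud_sizes(counts, min_pt=10, max_pt=48):
    """Map each word's frequency to a font size, as a word cloud might."""
    if not counts:
        return {}
    top = max(counts.values())
    return {w: min_pt + (max_pt - min_pt) * n / top for w, n in counts.items()}

sample = "The economy, the economy, and jobs. Jobs matter."
freqs = word_frequencies(sample, stop_words={"the", "and"})
print(freqs.most_common())  # [('economy', 2), ('jobs', 2), ('matter', 1)]
print(cloud_sizes(freqs))   # most frequent words get the largest sizes
```

Filtering out common "stop words" (here, "the" and "and") is the kind of step a frequency report typically offers, so that content-bearing terms rise to the top of the list.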

Supplemental uses of video research data and truly informed consent

Recording an interview

Video data is inherently different from most other forms of data. Video can capture tone, accent, inflection, pauses, facial expressions, body language, and other observable, potentially interpretable and analyzable aspects of human behavior. It brings us closer to capturing the reality we observe, for detailed study and multi-layered analysis, than any other form of data collection.

The qualities that make video so significant as data also make it very compelling to share with others when presenting our research. Video clips serve as direct, illustrative qualitative quotes during conference presentations, as well as on web sites and in other media-based venues for disseminating research results and findings, such as documentary films or television shows. The more visual the medium we use to share our findings, the more effective video is at providing evidence of our trustworthiness and the validity of our conclusions.

Video data captures research participants in a way that makes the anonymization of data nearly impossible. Faces and voices are usually readily recognizable. This places an ethical burden on the researcher who collects video data and wants to use that data beyond analysis for their immediate research project. Fortunately, there are a couple of relatively easy steps a researcher can take to simultaneously protect the privacy and dignity of research participants and maximize the usability of the video data that they collect.

Choosing between Transana Basic and Transana Professional


A recent support question:

I am a PhD student looking for an alternative to NVivo or Atlas.ti. In an upcoming project I will analyze a huge number (approximately 500 clips) of YouTube videos and was wondering which version of Transana I need, also taking into consideration that I barely have funding for technical devices (e.g. software…) at the moment.

Your main choice is between the Basic version (US$150, US$75 for current students) and the Professional version (US$350). I recognize the Professional is significantly more expensive, but I believe it provides excellent value for that additional money.

Both the Basic and Professional versions of Transana will allow you to organize and analyze your data set of 500 or so YouTube videos. Both offer the ability to make analytic selections from your video files and to categorize and code those selections. That is the core of your analysis, and Transana will let you do it better than NVivo or Atlas.ti would. Given your data set, Transana is the right choice.

But let me talk a little about what you can do in the Professional version that you can’t do in the Basic version to help you make your decision about which Transana version to buy.

Does Using Software Cause Shallow Analysis?

I can’t believe this is still a question in 2016, but apparently it is. A few months ago, I joined a conversation on ResearchGate.com:

The original question:

Can you recommend a software for analyzing qualitative data (interview transcripts)?

A colleague of mine and I collected 28 interviews and transcribed them for a qualitative content analysis. Before starting with the content analysis manually, I wondered if you have used a software in the past that you can recommend in terms of usability, comprehensibility of the analysis, visualization and costs? If so, which one do you suggest and why?

To which someone responded:

Sorry to read that the siren call of these programs is straying you from the in-depth analytic nature of Qualitative Research. … 🙂

Are Transcripts required in Transana?

A recent support interaction:

I am a PhD student interested in purchasing Transana, and I have a specific question about the coding of the data.

I have videos of people interacting and I want to look at the attitudes as much as at what they are saying (to do both conversation analysis and thematic analysis). For some videos I want to work only on the attitudes. I want to know if I can code the videos directly or if Transana requires a transcript to do the coding.

Transcripts are not required for working with media data in Transana. Go to the ScreenCasts page of the website at https://www.transana.com/learn/screencasts/ and look for “Creating Clips Without Transcripts.”

That said, you might still want to create “attitude” transcripts along with, or instead of, any verbal, conversation-analytic, or other transcripts you create for some of your media files. The Professional and Multiuser versions of Transana support multiple simultaneous transcripts linked to a media file, which allows for some very powerful analytic processes.

From my perspective, the media file is your primary data, and a transcript is a useful abstract representation of that data whose main purpose is to allow you to locate the portions of the media file that are analytically important or interesting. I see a transcript as being a map to your data. Just as you can have multiple types of geographical maps (political, topographical, agricultural, etc.), having multiple maps to your media data (verbal, gestural, conversation-analytic, attitudinal, visual, etc.) can be very useful too, especially when these maps can be overlaid upon each other through Transana Professional’s multiple simultaneous transcript functionality.