Why use Multiple Simultaneous Transcripts?

Multiple Simultaneous Transcripts can be an extremely powerful tool in qualitative analysis. They allow researchers to embody multiple perspectives related to their data, one in each Transcript. They can then compare and contrast them easily, allowing for a more complex, detailed, nuanced understanding of their data.

Here are a few examples of ways researchers have used multiple simultaneous transcripts in Transana:

  • Analytic Lenses – Researchers create different transcripts to represent different analytic views of their data. For example, Erica Halverson and her team were studying student-produced documentary films. They created transcripts to represent a director’s viewpoint, a sound-editor’s viewpoint, a video editor’s viewpoint, and a cinematographer’s viewpoint for each documentary they studied. This helped them document the complexity and sophistication that students brought to their work.
  • Descriptive Transcripts – Researchers develop transcription systems to capture a variety of non-verbal layers from video data, including gesture, movement, facial expression, and video screen-shots. These transcripts, used in parallel with verbatim transcripts, help bring important non-verbal elements of the data into the analysis.
  • Language Acquisition and Use – When data contains multiple languages, it can be helpful to create translation transcripts to supplement a main transcript where the spoken languages are represented. This helps to make data more accessible to researchers who may not be fluent in all languages represented in the data.
  • Really Dense Data – Some data defies representation in a single transcript because there is too much going on at once. Sometimes, the best way to make sense of really dense, complex, challenging data is to break it into manageable parts. Some examples might help.
    • Multi-camera classroom video can capture a tremendous amount of detail. Sometimes, multiple transcripts help make overlapping events more accessible to the research team for analysis.
    • The 2020 U.S. Presidential Debates were, to put it politely, chaotic. There were constant interruptions and an unprecedented (dare I say unpresidented?) amount of overlapping speech. As a result, it was nearly impossible to represent the talk in these debates in a single transcript. Making sense of this data became much simpler when I created three simultaneous transcripts: one for the moderator and one for each of the two candidates. See the Advanced Timecodes Tutorial for a ScreenCast.
    • Video game play data can be fast-paced and dense. To capture data from a massively multiplayer first-person shooter game for a single participant, we captured two video feeds. The first was screen-capture video with incoming audio that included the on-line chat of all game players except our observed participant. The second was over-the-shoulder video of our participant interacting with their computer, with an audio track that included their contributions to the on-line chat, their talk-aloud description of their in-game choices, and some discussion with the researcher about what was occurring. Since the audio tracks of these two video streams sometimes diverged, we created separate transcripts for each. We also created a third, “game play description” transcript that captured in detail what was occurring in the game, time-coded with single-frame accuracy so that we could time critical in-game events to within 1/30th of a second. This third transcript provided critical insight into what was occurring in this extremely fast-paced game.
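To give a sense of the single-frame precision described above: at 30 frames per second, each frame spans 1/30th of a second, so a frame index maps directly to a time code. The helper below is a hypothetical sketch of that arithmetic (the function name, the `fps` parameter, and the HH:MM:SS.mmm output format are illustrative assumptions, not Transana's internal representation).

```python
# Hypothetical helper illustrating frame-accurate time-coding arithmetic.
# Assumes a 30 fps video source, so each frame covers 1/30 s (~33 ms).
def frame_to_timecode(frame: int, fps: int = 30) -> str:
    """Convert a zero-based frame index to an HH:MM:SS.mmm time code."""
    total_ms = round(frame * 1000 / fps)          # elapsed milliseconds
    hours, rem = divmod(total_ms, 3_600_000)      # 3,600,000 ms per hour
    minutes, rem = divmod(rem, 60_000)            # 60,000 ms per minute
    seconds, ms = divmod(rem, 1000)               # 1,000 ms per second
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}.{ms:03d}"
```

For example, frame 30 of a 30 fps recording falls exactly at the one-second mark, while frame 1 lands about 33 milliseconds in, which is the finest timing distinction a single-frame-accurate transcript can make at that frame rate.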