Qualitative researchers beware: How not to take interview data for granted

This is a method post summarizing applied linguist Steven Talmy’s (2010) well-cited article on the interview as a social practice. In this commentary/opinion piece, which also cites many studies as examples, Talmy argues that we cannot take interview data for granted—that is, we should not collect and report it as a window on “truth,” whether that is factual truth or, failing that, at least the actual perception of the interviewee. He argues that interview data is co-created by the interviewer and interviewee in a collaboration, and that attention to this collaborative process, including its effect on the data gathered, is essential to conducting interview studies of quality.

Talmy, S. (2010). Qualitative interviews in applied linguistics: From research instrument to social practice. Annual Review of Applied Linguistics, 30, 128–148. https://doi.org/10.1017/S0267190510000085

Interviews are widely used in qualitative research. They appear in ethnographies and case studies as one of the data sources (apart from observations, artifacts, audio- or video-recordings, etc.). They appear in narrative and life history research; they appear in mixed-methods studies combining survey + interview data, or purely qualitative interview-based studies that employ “grounded theory” or another type of thematic analysis. We are all familiar with quotes being used as support for themes that “emerge” from the data.

And why should we be wary of this? Clearly, the interview is a go-to research method. Even in the age of COVID-19, there is little to no hindrance to “the utility, flexibility, and convenience of qualitative interviews for investigating an impressive array of matters” (p. 136). This type of data collection can go on largely as it used to before COVID-19, continuing to feed our publish-or-perish academic culture.

So what do we need to be careful about? Scholars like Block (2000), Johnston (1997), and Pavlenko (2007) have been arguing for 10, 15, or even 25 years not to take interview participants “at their word,” which I still see a lot in graduate student research (and sometimes even in my own!). What does it mean to take participants at their word? Atkinson and Silverman (1997) call this “a spurious sense of stability, authenticity, and security” (as cited in Talmy, 2010, p. 130). We have a spurious, or false, sense of stability because we cannot assume that what the interviewee says during the interview is what they would say in another context or moment. We have a spurious sense of authenticity because we assume they are telling the truth, or at least sincerely reporting their perspective (biased though it may be). We have a spurious sense of security because we present data as if it is valid despite the previous concerns regarding stability and authenticity.

Talmy points out that even earlier researchers like Briggs (1986) and Holstein and Gubrium (1995) were aware that language is NOT a neutral representation of reality, whether that reality is objective (i.e., what exists, the real truth or facts) or subjective (i.e., it may not be factual, but at least it is what the person sincerely believes). In fact, language is NOT a direct representation of either of these. Language used in the interview does not directly correspond to “what ‘really’ happened or what participants ‘actually’ felt” (p. 131). Holstein and Gubrium (2003) point out that the content of the interview (the “what”) is influenced by the interaction between the interviewer and interviewee(s) (the “how”). Therefore, sociologists Holstein and Gubrium—like applied linguists Talmy and Pavlenko and others—urge us to pay attention to “the interactional [and] narrative procedures of knowledge production” (as cited in Talmy, 2010, p. 131).

Your interviewees are not passive vessels of answers; working with them, you shape the content of the interview—adding here, reducing there, transforming facts and details, contributing to bias and distortion. Is this a problem? In Holstein and Gubrium’s view, it is NOT a problem but simply a fact of life: the interviewee “can hardly ‘spoil’ what he or she is, in effect subjectively creating” (as cited in Talmy, 2010, p. 131). What is needed, then, is simply attention to how this process of creation is occurring, rather than ignorance of it. Don’t just identify “themes” that emerge from your data and provide interview quotes to support them. Instead, “focus… as much on the assembly process as on what is assembled” (Holstein & Gubrium, as cited in Talmy, 2010, p. 131).

Four things to pay attention to

First, don’t see the interview as “just a report”; peer reviewers will groan when you mention themes that “emerged” from the data… as if a hand reached up and grabbed your face, one of my former professors joked. At least two things come into play which must be accounted for: how the interviewer and interviewee choose to shape the conversation, and their pre-existing relationship. We must “give serious consideration to the ways in which the interviewer’s participation is significantly implicated in what the respondents end up saying and how they say it” (Wooffitt & Widdicombe, as cited in Talmy, 2010, p. 136, Talmy’s emphasis). Often, in interview studies, we just see quote after quote. What led to the person saying that? What came before and after? How come the interviewer is often invisible?

Talmy gives several examples of published interview studies by respected applied linguists (p. 137): a white person interviewing youth of colour, an adult whose first language is English interviewing ESL youth (Talmy’s own study), a teacher interviewing her/his/their students, university students being interviewed by their professor and practicum coordinator… “This is not to single out these studies as being unusual in any way; indeed, the neglect of the role of the researcher/interviewer in coconstructing interview data—whatever their relationship to the interviewee—is common across studies that conceive of interviews as a research instrument” (p. 137).

Therefore, the second point is to be honest about power relations. When you get teachers interviewing their students or “native” speakers of a language interviewing “non-native” speakers, or university researchers interviewing participants who do not have the same level of education, or even adults interviewing children/youth, you have inescapable relations of power: “who chooses what—and what not—to discuss; who asks what questions, when, and how; who is ratified to answer them (and who is not); who determines when to terminate a line of questioning; and so on. There are also other potentially important asymmetries that may be less directly observable but equally relevant, if not more so, ranging from differences in institutional status, age, language expertise, social class, and more” (p. 138).

And then there’s the writing—oh, the writing! Bauman and Briggs (1990) call the process of presenting interviewees’ quotes in our articles “entextualization”: “the process of rendering discourse extractable, of making a stretch of linguistic production into a unit—a text—that can be lifted out of its interactional setting” (p. 73) and placed into another context (i.e., cut out of the interview and pasted into the research article). There is also power involved in this: how data is entextualized, decontextualized (losing the context of the interview) and recontextualized (i.e., to suit the researcher’s argument)… yet research participants’ statements are often presented “as stand-alone quotes of ‘what participants think’” (Talmy, 2010, p. 138).

Third, be careful when claiming that you “gave voice” to participants, as this also makes peer reviewers groan. This assumes that a person speaks with a stable individual voice which can, in the right circumstances (i.e., establishing trust) be brought out. That, of course, is a lie. There is no unified, coherent, essential self. Even if you, as a researcher, aim to get multiple participants (voices) to represent a variety of perspectives about an issue, in fact EACH participant has a variety of conflicting voices and perspectives in him/her/them that can vary from occasion to occasion. It is therefore important to acknowledge what voice is brought out in that participant on that occasion, how, and why.

Fourth, be circumspect in your data analysis. Talmy says analysis of interviews can vary in quality, from just presenting what participants say (not recommended), to summarizing/analyzing it (but not questioning it), to “sophisticated thematic analyses” (p. 138) that reflect on HOW we got the data as well as WHAT the data is. And when we analyze the WHAT, we should try to come up with points that are new and insightful, “themes that are important… but [that] may not have been reflected in previous scholarship” (Pavlenko, as cited in Talmy, 2010, p. 139). Examples include King’s (2008) discussion of heteronormativity in applied linguistics scholarship and Menard-Warwick’s (2005) memorable life histories of two women, Brenda and Serafina. However, research that focuses on the WHAT has five weaknesses (Pavlenko, 2007, pp. 166–167):

  • It is often unclear where themes come from and how they relate to each other.
  • We need clear procedures for linking examples to categories; that is, examples are sometimes not strongly linked to the themes they were used to support.
  • There is an overreliance on themes that repeat (i.e., participants mention them again and again), so we just end up focusing on the most common themes rather than the most important ones.
  • We need to use textual proof, so we focus on what is in the text, when what is NOT said can be even more informative.
  • There is lack of attention to the ways people “use language to interpret experiences and position themselves as particular kinds of people” (i.e., the HOW and not just the WHAT).

In other words, we just get a “laundry list” of themes that fail to make connections between different parts of the interview transcript, essentialize/oversimplify particular messages, or do not capture the bigger picture. Talmy adds a sixth weakness (p. 139): often, researchers talk about how their interview studies are “poststructuralist” or “social constructionist” and describe interviews as a “social practice,” and yet when they report data, they just report on it as a laundry list of positivist facts. They become invisible as interviewers, and say little to nothing about social conditions or HOW the interview data was produced.

Summary

What, then, is to be done? Interviewers need to analyze and report on the “how,” starting at the “micro” level (i.e., present longer stretches of the interview that INCLUDE what the interviewer said, analyzing how the interview unfolded). How did people come to say what they came to say in the sequential organization of talk? What earlier thing in the interview did that thing relate back to? There are also “macro” concerns… like how was the interview structured as a whole? [Holstein & Gubrium (2003) discuss in detail how the interviewer can shape this structure productively and then be honest about how they did so; I have my students read that article to inform their own interviewing.]

Talmy provides several examples of studies that have the kind of critical reflection he is advocating for. Liebscher and Dailey-O’Cain (2009) studied how western Germans who moved to Saxony (in East Germany) and German immigrants to Canada talked about the stigmatized or devalued “Saxon dialect.” They investigated not only what they said about the dialect, but how it was “interactionally worked up” in focus group interviews (p. 141). However, the Germans in Canada didn’t seem to notice this dialect as much, and focused more on projecting unity through the German language, regardless of dialect [which could have been because they were less aware of specific dialects in Germany]. Talmy states: “As a result, the interviews themselves became central analytic sites, where participants not only talked about language attitudes but also produced them with one another” (p. 141). [Another example of a study I know that is similar is Sandhu’s (2015) study of how women in India styled different dialects of English as a performance for Sandhu (the interviewer).]

Another piece that Talmy praises is Prior’s (2011) study of Trang, a multilingual, multiethnic adult immigrant to Canada who spoke English as an additional language. Trang told two versions of a frustrating bank experience, and Prior compared the two different versions of the story, “characterizing them not as an indication of inconsistency but as evidence that the tellings served substantially different rhetorical purposes in the different contextual circumstances of the two interviews” (Talmy, 2010, p. 142). Talmy (who I think was Prior’s classmate in graduate school) explains: “The analysis is, once again, fundamentally reflexive, as it accounts not only for the ‘content’ of the two versions of the bank narrative but also the interactional and interpersonal circumstances of their local production” (p. 142).

To conclude his article, Talmy states that we don’t always need to conceptualize interviews as a social practice—we can still use them as research instruments—but still, it is beneficial to think about such issues. If they go untheorized, data collection and analysis become less valid and even less ethical. We need to consider our roles in influencing participants’ answers, how interviewees “perform” for us, the extent to which we can trust their statements as stable facts, how we analyze and represent what they say, etc.

References

Atkinson, P., & Silverman, D. (1997). Kundera’s Immortality: The interview society and the invention of the self. Qualitative Inquiry, 3(3), 304–325. https://doi.org/10.1177/107780049700300304

Bauman, R., & Briggs, C. (1990). Poetics and performance as critical perspectives on language and social life. Annual Review of Anthropology, 19, 59–88.

Block, D. (2000). Problematizing interview data: Voices in the mind’s machine? TESOL Quarterly, 34(4), 757–763. https://doi.org/10.2307/3587788

Briggs, C. (1986). Learning how to ask: A sociolinguistic appraisal of the role of the interview in social science research. Cambridge, U.K.: Cambridge University Press.

Holstein, J. A., & Gubrium, J. F. (1995). The active interview. Thousand Oaks, CA: Sage.

Holstein, J. A., & Gubrium, J. F. (2003). Active interviewing. In J. F. Gubrium & J. A. Holstein (Eds.), Postmodern interviewing (pp. 67–80). Thousand Oaks, CA: Sage.

Johnston, B. (1997). Do ESL teachers have careers? TESOL Quarterly, 31(4), 681–712. https://doi.org/10.2307/3587756

King, B. W. (2008). “Being gay guy, that is the advantage”: Queer Korean language learning and identity construction. Journal of Language, Identity, & Education, 7(3-4), 1–33. https://doi.org/10.1080/15348450802237855

Liebscher, G., & Dailey-O’Cain, J. (2009). Language attitudes in interaction. Journal of Sociolinguistics, 13(2), 195–222. https://doi.org/10.1111/j.1467-9841.2009.00404.x

Menard-Warwick, J. (2005). Intergenerational trajectories and sociopolitical context: Latina immigrants in adult ESL. TESOL Quarterly, 39(2), 165–185. https://doi.org/10.2307/3588307

Pavlenko, A. (2007). Autobiographic narratives as data in applied linguistics. Applied Linguistics, 28(2), 163–188. https://doi.org/10.1093/applin/amm008

Prior, M. T. (2011). Self-presentation in L2 interview talk: Narrative versions, accountability, and emotionality. Applied Linguistics, 32(1), 60–76. https://doi.org/10.1093/applin/amq033

Sandhu, P. (2015). Stylizing voices, stances, and identities related to medium of education in India. Multilingua, 34(2), 211–235. https://doi.org/10.1515/multi-2014-1012

Published by annamend

Assistant Professor in the Faculty of Education, University of Hong Kong
