"Differences in Print and Screen Reading in Graduate Students"
About the Author

Lauren J. Short is a PhD candidate in Composition at the University of New Hampshire. Her research interests include religious rhetorics, feminist rhetorics, and digital literacy pedagogy.
Methods

Participants

Six graduate students participated in my data collection. One is a PhD student in the Composition and Rhetoric program (Claudia), one is an MA candidate in Linguistics (Gertrude), one is an MFA candidate in Fiction (Phil), one is a PhD student in Economics (Courtney), one is a PhD candidate in Economics (David), and one is a PhD candidate in Natural Resources and Earth Systems Science (Amanda). Gertrude, my respondent from the MA program in Linguistics, is also a multilingual speaker. Four respondents were female and two were male. All respondents were Caucasian and, with the exception of Gertrude, native speakers of English. Participants chose whether to be referred to by their first name or by a pseudonym.

Procedure

Participants were asked a series of 12 interview questions (Appendix A) about their reading strategies in print and on screen. Because the term "strategy" is somewhat vague, I provided participants with a list of common reading strategies before they began the study, so that participants and I could reach a clearer shared understanding of what "reading strategies" can look like. Participants could use the list as a jumping-off point for speaking to their own experiences while also disclosing strategies that did not appear on the list. This list includes:
The list of strategies here is not meant to be exhaustive; rather, it allowed participants and me to come to a common understanding of what a reading strategy looks like. The interview process generally took about 15 minutes per person. Claudia, Gertrude, and Phil were interviewed in person; I recorded their responses and later transcribed the material. Courtney, David, and Amanda received the same interview questions in a digital word-processing document and were asked to type their responses directly. I changed the procedure to make participation easier: these respondents could complete the interview on their own time and from a location of their choosing, rather than arranging a time and place to meet me in person. The change also eliminated the considerable labor of transcribing participant responses.

Analytical Methods

I first employed in vivo coding (Saldaña, 2009), which led to a final discourse analysis (Gee, 1999). In vivo coding is "the practice of assigning a label to a section of data, such as an interview transcript, using a word or short phrase taken from that section of data" (Given, 2008). The aim of in vivo coding is to stay as close as possible to the participants' own words. This method was useful because the in vivo codes allowed me to trace patterns among the concepts that interview participants identified in their responses. From these initial in vivo codes, I narrowed the focus of the concepts examined in this study, which can be found in Table 1. Discourse analysis then allowed me to draw conclusions that were not explicitly stated in respondents' words, inferring meaning where appropriate.