
"Differences in Print and Screen Reading in Graduate Students"


About the Author

Lauren J. Short is a PhD candidate in Composition at the University of New Hampshire. Her research interests include religious rhetorics, feminist rhetorics, and digital literacy pedagogy.

Contents

Introduction

Theory

Study Aims and Introduction to Research Framework

Methods

Results

Discussion

Conclusion

References

Appendix: Interview Questions

Methods

Participants

Six graduate students participated in my data collection process. One participant is a PhD student in the Composition and Rhetoric program (Claudia), one is an MA candidate in Linguistics (Gertrude), one is an MFA candidate in Fiction (Phil), one is a PhD student in Economics (Courtney), one is a PhD candidate in Economics (David), and one is a PhD candidate in Natural Resources and Earth Systems Science (Amanda). My respondent from the MA program in Linguistics is also a multilingual speaker. Four respondents were female and two were male. All respondents were Caucasian and native speakers of English, with the exception of Gertrude. Participants were allowed to choose whether to be referred to by their first name or by a pseudonym.

Procedure

Participants were asked a series of 12 interview questions (Appendix A) about their reading strategies on print and on screen. Since the term “strategy” is somewhat vague, I provided participants with a list of common reading strategies before they began the study. The purpose of this list was to help participants and me come to a clearer understanding of what “reading strategies” can look like. Participants were able to use the list as a jumping-off point for speaking to their own experiences, while also disclosing other strategies that did not appear on the list. This list includes:

  • underlining and/or highlighting portions of text
  • taking margin notes
  • creating annotated bibliographies
  • taking notes in separate locations
  • using sticky notes
  • glancing through the table of contents
  • reading through headings
  • identifying the thesis/main points
  • using symbols as markers of important points (like stars)
  • creating indexes
  • using apps like Notability or iAnnotate
  • choosing which screen to read on when reading digitally (computer, tablet)

The list of strategies here is not meant to be exhaustive; rather, it gave participants and me a shared sense of what a reading strategy can look like.

The interview process generally took about 15 minutes per person. Claudia, Gertrude, and Phil were interviewed in person; I recorded their responses and later transcribed the material. Courtney, David, and Amanda received the same interview questions in a digital word-processing document and were asked to type their responses directly. I changed the procedure because responding in writing seemed easier for graduate students, who could participate on their own time and from locations of their choosing rather than arranging a time and place to meet with me in person. The change also involved considerably less labor, since I did not have to transcribe those participants’ responses.

Analytical Methods

I first employed in vivo coding (Saldaña, 2009), which informed a final discourse analysis (Gee, 1999). In vivo coding is “the practice of assigning a label to a section of data, such as an interview transcript, using a word or short phrase taken from that section of data” (Given, 2008). The aim of in vivo coding is to stay as close as possible to the participants’ own words. These methods were useful because in vivo coding helped me identify patterns linking the concepts that interview participants raised in their responses. From these initial in vivo codes, I was able to narrow the focus of the concepts I examine in this study, which can be found in Table 1. Discourse analysis, in turn, allowed me to draw conclusions that were not explicitly stated in respondents’ words and to infer meaning when appropriate.
