"Student Perceptions of Writing Instruction: Twitter as a Tool for Pedagogical Growth"
About the Authors

Sarah Lonelodge is a PhD candidate in the Rhetoric and Writing Studies program at Oklahoma State University. She also serves as an assistant director of the first-year composition program and as president of OSU’s chapter of the Rhetoric Society of America. Her research interests include composition pedagogy and religious rhetoric.

Katie Rieger is a PhD candidate in the Rhetoric and Writing Studies program at Oklahoma State University and an Assistant Professor of English at Benedictine College. Her research interests include student-centered pedagogy; educational technology for distance learning; writing center studies; and the intersection of intercultural communication and technical writing pedagogy.
Data Analysis

The process of coding was multifaceted and recursive, as is expected with grounded theory (Birks & Mills, 2015; Charmaz, 2014; Glaser & Strauss, 1999). In the following sections, we discuss our use of open coding, attitudinal coding, and timeline coding. During this process, we first coded items individually and then discussed and adjusted our codes as we reflected on the data, in order to reach 100% interrater reliability. These conversations were vital, especially when examining how humor or sarcasm and media such as GIFs and emojis were used in the tweets.

Open Coding

Initial data collection resulted in 306 tweets, of which 19 were excluded during the coding process for one of three reasons:
The remaining 287 tweets were labeled with approximately 512 unique codes during the open-coding process. The most common codes were “Professor says” and “Student says,” which were applied when a tweet explicitly reported communication from a professor or student. Other codes, such as “Feedback,” “Grades,” and “Comments,” were used when the tweet indicated that the professor had reviewed the student’s writing. Table 1 provides a sample from the open coding process.
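To illustrate how open codes of this kind can be tallied to surface the most common labels, consider the following sketch. The code labels are drawn from the examples above, but the tweet-to-code mapping is entirely hypothetical; our actual coding was done by hand.

```python
from collections import Counter

# Hypothetical mapping of tweet IDs to assigned open codes;
# a single tweet may carry several codes.
open_codes = {
    "tweet_001": ["Professor says", "Feedback"],
    "tweet_002": ["Student says", "Grades"],
    "tweet_003": ["Professor says", "Comments"],
    "tweet_004": ["Professor says"],
}

# Tally how often each code appears across the corpus.
code_counts = Counter(code for codes in open_codes.values() for code in codes)

for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```

A frequency table like this is only a starting point; as the following sections show, the meaning of each tweet still had to be interpreted holistically.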
Attitudinal Coding

Following open coding, the tweets were analyzed based on attitude (POSITIVE or NEGATIVE) in order to determine students’ perceptions of and reactions to specific issues raised in the tweets. Attitude was determined by the tone, the content or point being made, and the media used. As with the open coding process, we discussed each of our codes to reach consensus about the holistic meaning of each tweet. The process of attaching even the broad terms “POSITIVE” and “NEGATIVE” proved difficult, which is why we avoided using more specific language. Chen, Vorvoreanu, and Madhavan (2014) used a similar method in analyzing engineering students’ tweets.

Social media data mining has been used to learn more about student perceptions in the past (Patil & Kulkarni, 2018; Dietz-Uhler & Hurn, 2013; Shen & Kuo, 2015), but as this data collection method becomes more prevalent as a way to learn about these perceptions, we argue (like many of the cited scholars) that we should leverage these data in ways that can enhance our pedagogy. Additionally, when using social media data mining and coding, we found that simply counting word choices would not suffice as a method of coding, which aligns with Chen, Vorvoreanu, and Madhavan’s (2014) findings. For example, many of the tweets coded as NEGATIVE used words and phrases that would likely be associated with a positive attitude, such as “I like it how,” “laughing,” and “hahahaha.” Read holistically, however, these tweets indicated that the student felt angry, frustrated, worried, or otherwise negative and used sarcasm to convey that feeling. This difference may be characteristic of tweets as a genre, which often use sarcasm and humor to denote a negative attitude.
Likewise, many of the POSITIVE tweets included negative words and phrases that could easily be associated with a negative attitude, such as “not sleeping,” “you didn’t follow the prompt,” and, in one tweet about group work, “I’m writing all of it.” However, the use of emojis and/or the larger point of the tweet indicated a generally positive attitude. Examining students’ attitudes toward specific pedagogical practices allowed us to better determine implications for teaching writing. Examples of both positive and negative tweets are provided in Table 2.
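The pitfall of word-level counting can be sketched as follows. The keyword lists and the sample tweet below are invented for illustration, not drawn from our data: a naive positive-word counter labels a sarcastic tweet POSITIVE, whereas a human coder reading the tweet holistically would label it NEGATIVE.

```python
# Illustrative (hypothetical) keyword lexicons for naive sentiment counting.
POSITIVE_WORDS = {"like", "love", "laughing", "hahahaha", "great"}
NEGATIVE_WORDS = {"angry", "frustrated", "worried", "hate"}

def naive_attitude(tweet: str) -> str:
    """Label a tweet by counting positive vs. negative keywords."""
    words = tweet.lower().split()
    pos = sum(w in POSITIVE_WORDS for w in words)
    neg = sum(w in NEGATIVE_WORDS for w in words)
    return "POSITIVE" if pos >= neg else "NEGATIVE"

# A sarcastic tweet: superficially positive words, negative holistic meaning.
sarcastic = "I like it how my essay came back covered in red ink hahahaha"
print(naive_attitude(sarcastic))  # the word counter says POSITIVE
# A human coder, reading tone and context, would code this NEGATIVE.
```

This is precisely why our attitudinal coding relied on discussion and consensus rather than on automated word counts.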
Timeline Coding

Finally, in order to analyze the relationships among attitude, professor, and stage of writing, we coded the tweets based on the point where the student seemed to be in the writing timeline, which refers to the broad stages of BEFORE, DURING, AFTER, or NOT writing. This coding was based on the tweet’s tense and content. Table 3 provides examples of the timeline coding. Timeline coding chiefly involved analyzing word choices and tenses:
Each code was carefully analyzed to determine the general point in the writing process, but misinterpretations are certainly possible due to the imprecise nature of language. For example, we determined that the difference between NOT and AFTER lay in language indicating avoidance of writing rather than completion. Although some tweets coded as NOT indicated that an assignment had been given but not yet turned in, and so might have been coded as DURING, these tweets indicated that the student was intentionally pursuing an alternative activity or otherwise avoiding writing.

With each tweet coded for content, attitude, and timeline, we began axial and selective coding, which we used to “deconstruct the data into manageable chunks in order to facilitate an understanding of the phenomenon in question” (Cohen, Manion, & Morrison, 2011, p. 600). In other words, tweets were first sorted based on the point in the writing timeline (BEFORE, DURING, AFTER, and NOT). Within each of these four lists, we then coded the tweets, based on the attitude, as positive or negative. Because we conducted these processes together at each stage of analysis, we achieved 100% interrater reliability. The themes within the subcategories are discussed in the following section.