It’s Not Just About Convenience: Multimodality and Transmodality in the FYC Classroom
by Tara Salvati | Xchanges 19.2, Fall 2025
Contents
Multimodality and Transmodality in the Classroom
Equity, Time Management, and the Graduate Teaching Assistant
Affordances and Constraints of Multimodality and Transmodality
Defining Terms
Before calling for greater use of multimodality and transmodality in the first-year composition classroom, it is important to understand the terms fully. For some instructors, these concepts and ways of approaching the first-year composition classroom feel instinctual, but for others, especially new instructors, understanding them before trying to execute them in class is necessary.
Mode and Multimodality
A mode is a form in which information is presented. The most traditional mode, of course, is written text, but as technology has advanced, new modes have been created and used in the classroom, creating space for multimodality. Multimodality offers many affordances in the classroom and, in some cases, can help keep students engaged, since it is usually seen as the professor “mixing things up.” Having students watch a video for homework instead of reading a chapter is one example. Multimodality is not restricted to work outside the classroom, either: instructors might have students draw something as a warm-up or use music to help them connect concepts. It is important to engage with technologies as they develop and evolve, because doing so keeps us connected to students and their experiences. As Kathleen Blake Yancey writes, “Our daily communicative, social, and intellectual practices are screen-permeated. Further…the screen is the language of the vernacular, that if we do not include it in the school curriculum, we will become…irrelevant” (Yancey 305). If instructors are unwilling to meet students where they are, we risk being left behind and failing our students.
Despite how we understand multimodality in our current moment, it is not a new concept. Jason Palmeri’s book Remixing Composition: A History of Multimodal Writing Pedagogy traces the now largely forgotten history of multimodality in classroom settings. By engaging with that history, Palmeri is able to “demonstrate the unique disciplinary heritage that compositionists bring to the study and teaching of multimodal composing…” (Palmeri 7). Though new technologies have made multimodal composing more accessible in recent years, it is worth noting that Palmeri describes multimodal texts from decades before the rise of the Internet (Palmeri 5-7).
Multimodality as we know it now was first introduced to higher education in the 1990s, alongside the introduction of the internet and of students bringing laptop computers into classroom spaces (Almumen 748). Over time, it has become an invaluable tool for college professors, as it can be used in multiple contexts to reach many different types of students. This makes it all the more important for professors of first-year composition to be literate in these different modalities. Huda Almumen argues that teachers at any level must be trained to engage with students interactively, which captures and maintains students’ attention (Almumen 749). As she argues, “…multimodality enables college students to apply content learned in class, analyze their actions, report on and reflect on their experiences, deriving what they best could learn from them, and how they could shape and develop their skills” (Almumen 749). In short, the way instructors teach has changed. In the first-year composition classroom, an instructor may assign a multimodal artifact, such as a video for homework, but inside the classroom must contextualize what students have watched to meet the goals of the course.
Transmodality
In many ways, transmodality goes beyond the scope of multimodality in that it gives students the agency to decide how they want to learn or absorb information. Kate Artz, Danah Hashem, and Anne Mooney describe transmodality as “referring to translating the primary modes of expression in a text into new and different modes while maintaining the essential meaning of the original text” (para. 1). In other words, a transmodal text takes one work and renders it in different modes, each containing the same information. Some news outlets have begun to implement these ideas in small ways. Alongside an article published online, readers may have the option to listen to an audio version or a podcast discussing the article, which benefits auditory learners, individuals with reading comprehension difficulties, and readers who are blind or visually impaired. Other outlets use short videos, which likely benefit visual learners and people on the move who have only a short amount of time to digest information.
The term “transmodality” is fairly new, but the practice is being incorporated into everyday life without people knowing the name for what they are doing. Margaret R. Hawkins, a professor whose research focuses on applied linguistics, is interested in the “trans” turn in language. In particular, she points to the rapid globalization of technology: “…human communication entails the coordination and interpretation of a vast array of semiotic resources that are entangled with language in fluid and unpredictable ways” (Hawkins 55-56). The idea that transmodality stems from globalization has clear implications for non-native speakers in the English classroom. If students are assigned pages of reading for homework and struggle to understand written work, but are confident in their ability to listen, comprehend, and engage in conversation, then a transmodal text with an auditory option would improve their understanding of the material. It would also likely take these students less time to complete the assignment, doubling the benefits of the transmodal text.
It is worth noting that transmodality differs from Jay David Bolter and Richard Grusin’s 1999 concept of remediation. Remediation, as they define it, follows a double logic in which “Our culture wants both to multiply its media and to erase all traces of mediation: ideally, it wants to erase its media in the very act of multiplying them” (Bolter and Grusin 5). The theory emphasizes the immediacy of media and information. Beyond immediacy, remediation also concerns embodiment and an audience’s desire to experience or embody the thing being represented, a common example being virtual reality (VR) headsets (Bolter and Grusin 5-6). While remediation is certainly worthy of our study and attention, it can be seen as a precursor to transmodality. Bolter and Grusin focused on hypermediacy and the speed at which content can be adapted to a new medium; transmodality, by contrast, considers all (or many) different media sources simultaneously and allows users to choose their preference. Remediation lets the current culture and society dictate which modality we should prefer; we can understand remediation as seeking the next new modality, whereas transmodality offers the option to choose.