Chat(GPT)-ing about the Affordances Generative AI Tools Offer for ADHD Writers
by Alex Jennings | Xchanges 19.1, Spring 2025
Following Conversations: To Chat(GPT) or not to Chat(GPT)
Like many Composition instructors, I’ve found myself in the middle of various conversations about the use of genAI in our writing classrooms. During a series of interdisciplinary generative AI workshops I was invited to attend at the University of Pittsburgh during the 2023–2024 academic year, conversations quickly emerged about the degree to which ChatGPT should or should not be welcomed into our classrooms and teaching practices. Attitudes ranged from a total ban, to somewhat restricted use, to a more enthusiastic welcoming of such tools, but much uncertainty about procedure and ethics lived on. Some instructors feared that permitting the use of ChatGPT, whether in their own instruction or for student use, would amount to an open endorsement of cheating, plagiarism, and inauthentic work. Conversations about navigating the current genAI landscape are being amplified across the field’s flagship conferences and journals. The 2024 Conference on College Composition and Communication (CCCC) hosted dozens of panels about genAI, and the WAC Clearinghouse, an open-access, peer-reviewed Writing Studies publishing forum, published “TextGenEd: Teaching with Text Generation Technologies,” a collection of teaching activities and resources related to genAI technologies.
The work being carried out in the field to date suggests that folks are already experimenting with ways to integrate genAI into their writing classrooms. However, this shouldn’t signal a carefree application of genAI in which any consideration of ethics and regulatory use is absent. LLMs like ChatGPT are good at producing language, but understanding how they operate is important for articulating the distinction between human interpretation and meaning making, on the one hand, and machine-produced language on the other. As Byrd cautions, LLMs, while seeming effective, “have really created mathematical formulas to predict the next token in a string of words from analyzing patterns of human language. They learn a form of language, but do not understand the implicit meaning behind it” (136). ChatGPT doesn’t “understand” language in the way that we as humans do. It uses a very large corpus of data to essentially predict, sequentially, what comes next, whereas we as humans can “make productive use of uncertainty” (Vee 2) and engage in critical inquiry by asking and answering questions informed by students’ own knowledge and lived experiences, something an LLM can’t do.
It’s unsurprising that we are starting to see institutions adjust their policies, because it is clear that AI is here to stay (Morgan). However, while advocacy for the use of genAI tools in Writing classrooms is growing, there are nuanced considerations that extend beyond the concern that embracing genAI will “damage student learning by shortcutting the writing process” (Sano-Franchini et al.). Some scholars are thinking about the risks associated with genAI use, such as upholding white supremacy and white-dominant literacy practices (Bender et al.; Byrd; Sano-Franchini et al.), perpetuating environmental racism, and the lack of regulations in place for the big tech companies that own these tools (Bender et al.; Sano-Franchini et al.). Like any pedagogical initiative we choose to fold into our instruction, there is a degree of responsibility that comes with the resources we choose to utilize, and we should treat genAI technologies no differently.
Jennifer Sano-Franchini, Megan McIntyre, and Maggie Fernandes, the authors of “Refusing GenAI in Writing Studies: A Quickstart Guide,” offer a framework for instructors to make discretionary decisions about using genAI, grounding the rationale for their degree of engagement in disciplinary knowledge, risks, and long-term implications. The guide offers a valuable framework for thinking about how genAI can significantly change the ways we teach writing, who is affected, and refusal as a sliding scale. Ultimately, the authors close with the claim that “it is a rational and principled choice to not use GenAI products unless and until we have determined that their benefits outweigh their costs” (Sano-Franchini et al.). This claim has prompted me to think further about consciously weighing some of those benefits. We can advocate for informed and responsible uses of genAI while working to determine whether the benefits outweigh the risks. GenAI tools can benefit neurodivergent student writers (NDSW) by automating the parts of the writing process they struggle with, in ways that do not “shortcut” the learning or writing process (Graham). Investigating how ADHD symptoms and executive dysfunction materialize as obstacles within the writing process is worthwhile if we are to avoid overlooking the benefits genAI may have for neurodiverse writers while aiming to center accessibility in our teaching.