This fall is the first in nearly 20 years that I am not returning to the classroom. For most of my career, I taught writing, literature, and language, primarily to university students. I quit, in large part, because of large language models (LLMs) like ChatGPT.
Virtually all experienced scholars know that writing, as historian Lynn Hunt has argued, is “not the transcription of thoughts already consciously present in [the writer’s] mind.” Rather, writing is a process closely tied to thinking. In graduate school, I spent months trying to fit pieces of my dissertation together in my mind and eventually found I could solve the puzzle only through writing. Writing is hard work. It is sometimes frightening. With the easy temptation of AI, many—possibly most—of my students were no longer willing to push through discomfort.
In my most recent job, I taught academic writing to doctoral students at a technical college. My graduate students, many of whom were computer scientists, understood the mechanisms of generative AI better than I did. They recognized LLMs as unreliable research tools that hallucinate and invent citations. They acknowledged the environmental impact and ethical problems of the technology. They knew that models are trained on existing data and therefore cannot produce novel research. None of that knowledge, however, stopped my students from relying heavily on generative AI. Several admitted to drafting their research in note form and asking ChatGPT to write their articles.
As an experienced teacher, I am familiar with pedagogical best practices. I scaffolded assignments. I researched ways to incorporate generative AI into my lesson plans, and I designed activities to draw attention to its limitations. I reminded students that ChatGPT may alter the meaning of a text when prompted to revise, that it can yield biased and inaccurate information, that it does not generate stylistically strong writing, and, for the grade-oriented, that it does not result in A-level work. It did not matter. The students still used it.
In one activity, my students drafted a paragraph in class, fed their work to ChatGPT with a revision prompt, and then compared the output with their original writing. However, these types of comparative analyses failed because most of my students were not developed enough as writers to analyze the subtleties of meaning or evaluate style. “It makes my writing look fancy,” one PhD student protested when I pointed to weaknesses in AI-revised text.
My students also relied heavily on AI-powered paraphrasing tools such as Quillbot. Paraphrasing well, like drafting original research, is a process of deepening understanding. Recent high-profile examples of “duplicative language” are a reminder that paraphrasing is hard work. It is not surprising, then, that many students are tempted by AI-powered paraphrasing tools. These technologies, however, often result in inconsistent writing style, do not always help students avoid plagiarism, and allow the writer to gloss over understanding. Online paraphrasing tools are useful only when students have already developed a deep knowledge of the craft of writing.
Students who outsource their writing to AI lose an opportunity to think more deeply about their research. In a recent article on art and generative AI, author Ted Chiang put it this way: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.” Chiang also notes that the hundreds of small choices we make as writers are just as important as the initial conception. Chiang is a writer of fiction, but the logic applies equally to scholarly writing. Decisions regarding syntax, vocabulary, and other elements of style imbue a text with meaning nearly as much as the underlying research.
Generative AI is, in some ways, a democratizing tool. Many of my students were non-native speakers of English, and their writing frequently contained grammatical errors. Generative AI is effective at correcting grammar. However, the technology often changes vocabulary and alters meaning even when the only prompt is "fix the grammar." My students lacked the skills to identify and correct those subtle shifts in meaning. I could not convince them of the need for stylistic consistency or of the need to develop their own voices as research writers.
Recognizing AI-generated or AI-revised text was not the problem. At the start of every semester, I had students write in class. With that baseline sample as a point of comparison, it was easy for me to distinguish between my students' writing and text generated by ChatGPT. I am also familiar with AI detectors, which purport to indicate whether something has been generated by AI. These detectors, however, are faulty. AI-assisted writing is easy to identify but hard to prove.
As a result, I found myself spending many hours grading writing that I knew was generated by AI. I noted where arguments were unsound. I pointed to weaknesses such as stylistic quirks that I knew to be common to ChatGPT (I noticed a sudden surge of phrases such as “delves into”). That is, I found myself spending more time giving feedback to AI than to my students.
So I quit.
The best educators will adapt to AI. In some ways, the changes will be positive. Teachers must move away from assigning mechanical activities and simple summaries. They will find ways to encourage students to think critically and to learn that writing is a way of generating ideas, revealing contradictions, and clarifying methodologies.
However, those lessons require that students be willing to sit with the temporary discomfort of not knowing. Students must learn to move forward with faith in their own cognitive abilities as they write and revise their way into clarity. With few exceptions, my students were not willing to enter those uncomfortable spaces or remain there long enough to discover the revelatory power of writing.