What Happened When I Tried to Replace Myself with ChatGPT in My English Classroom

    Piers Gelly on a Semester-Long Dive into the AI Discourse

    My students call it “Chat,” a cute nickname they all seem to have agreed on at some point. They use it to make study guides, interpret essay prompts, and register for classes, turning it loose on the course catalog and asking it to propose a weekly schedule. They use it to make their writing sound more “professional,” including emails to professors like me, fearing that we will judge them for informal diction or other human errors.


    Like many teachers at every level of education, I have spent the past two years trying to wrap my head around the question of generative AI in my English classroom. To my thinking, this is a question that ought to concern all people who like to read and write, not just teachers and their students. Today’s English students are tomorrow’s writers and readers of literature. If you enjoy thoughtful, consequential, human-generated writing—or hope for your own human writing to be read by a wide human audience—you should want young people to learn to read and write. College is not the only place where this can happen, of course, but large public universities like UVA, where I teach, are institutions that reliably turn tax dollars into new readers and writers, among other public services. I see it happen all the time.

    There are valid reasons why college students in particular might prefer that AI do their writing for them: most students are overcommitted; college is expensive, so they need good grades for a good return on their investment; and AI is everywhere, including the post-college workforce. There are also reasons I consider less valid (detailed in a despairing essay that went viral recently), which amount to opportunistic laziness: if you can get away with using AI, why not?

    It was this line of thinking that led me to conduct an experiment in my English classroom. I attempted the experiment in four sections of my class during the 2024-2025 academic year, with a total of 72 student writers. Rather than taking an “abstinence-only” approach to AI, I decided to put the central, existential question to them directly: was it still necessary or valuable to learn to write? The choice would be theirs. We would look at the evidence, and at the end of the semester, they would decide by vote whether AI could replace me.

    What could go wrong?


    *

    Speaking about AI in the classroom, OpenAI CEO Sam Altman has described ChatGPT as “a calculator for words.” This analogy indicates the magnitude of change that ChatGPT is poised to bring about—imagine how radically math class must have changed when calculators became widely affordable—but it also indicates that change itself, even radical change, is not necessarily scary. Most AI skeptics would admit that math class survived the advent of the calculator.

    At the beginning of the semester, I asked my students to complete a baseline survey registering their agreement with several statements, including “It is unethical to use a calculator in a math class” and “It is unethical to use a generative AI service in an English class.”


    In my admittedly small sample, Altman’s analogy didn’t hold up. Calculators were uncontroversial: across my 72 students, one agreed that it was unethical to use a calculator, five chose Neutral, and the rest either disagreed or strongly disagreed. When it came to the notion that using AI in an English class was unethical, however, the results indicated an uncertainty that tilted toward skepticism: Neutral was the most popular choice (33 votes), followed by Agree (17).

    But plenty of people do things that they believe to be unethical. In my next question, I asked students to indicate, anonymously, whether they had previously used AI in for-credit writing assignments. They confirmed that they had used it for editing first drafts (22%), outlining (28%), interpreting prompts (38%), proofreading (50%), and brainstorming (56%), with smaller pockets using it for finding sources or writing first drafts.


    Depending on your perspective, of course, some of these use cases might not be unethical, but the tension between these survey answers encouraged me: far from being nihilistic, take-no-prisoners cheaters, my students seemed genuinely confused. I could work with confused.

    *

    In the weeks that followed, I had my students complete a series of writing assignments with and without AI, so that we could compare the results.

    My students liked to hate on AI, and tended toward food-based metaphors in their critiques: AI prose was generally “flavorless” or “bland” compared to human writing. They began to notice its tendency to hallucinate quotes and sources, as well as its telltale signs, such as the weird prevalence of em-dashes, which my students never use, and sentences that always include exactly three examples. These tics quickly became running jokes, which made class fun: flexing their powers of discernment proved to be a form of entertainment. Without realizing it, my students had become close readers.

    During these conversations, my students expressed views that reaffirmed their initial survey choices, finding that AI wasn’t great for first drafts, but potentially useful in the pre- or post-writing stages of brainstorming and editing. I don’t want to overplay the significance of an experiment with only 72 subjects, but my sense of the current AI discourse is that my students’ views reflect broader assumptions about when AI is and isn’t ethical or effective.


    It’s increasingly uncontroversial to use AI to brainstorm, and to affirm that you are doing so: just last week, the hosts of the New York Times’s tech podcast spoke enthusiastically about using AI to brainstorm for the podcast itself, including coming up with interview questions and summarizing and analyzing long documents, though of course you have to double-check AI’s work. One host compares AI chatbots to “a very smart assistant who has a dozen Ph.D.s but is also high on ketamine like 30 percent of the time.”


    The stoned-assistant image is meant to be funny, of course, but the podcast hosts’ companionable and mostly fun account of AI has some appeal to me as a writing teacher. There is a difference—or at least I think there is—between those two Times writers and my students, which is that the writers are writing for work, whereas my students are learning to write. But I try not to be a downer if I can avoid it.

    To that end, I had my students read an essay with a very long title, by two language faculty at MIT, that urges readers to “shed fears by deconstructing the persuasive and dystopian ‘human against machine’ narrative.” It is a tired old trope, they argue, that thinking machines will necessarily attempt to destroy us. We’re so invested in The Terminator and 2001: A Space Odyssey that we’re blind to stories that indicate the opposite. The authors point out that most people, even if they’re not chess fans, have heard of Deep Blue, the chess-playing machine that beat World Chess Champion Garry Kasparov in 1997; however, few people outside the chess world are familiar with Centaur Chess, which Kasparov invented shortly thereafter. In Centaur Chess, a computer faces off against a hybrid team that includes both a human and a computer—and “surprisingly, Human+AI Centaurs routinely beat today’s most sophisticated solo computers.” This isn’t true anymore, and wasn’t true when the authors wrote it in 2024—by 2013, experts were declaring that Centaur teams could no longer beat the most advanced chess engines—but the invitation not to panic is appealing: “Once one accepts the merits of a ‘Human with Machine’ narrative,” the authors write, “the threat starts to disappear.”

    The authors propose that AI shouldn’t perform students’ work for them, but should serve as a sort of personalized coach to help students learn new skills. Like training wheels on a bike or water wings in the pool, AI can help most in scenarios where it brings about its own obsolescence. By accepting AI “as a partner,” the authors write, “we can engage in higher-order problem solving through human-machine collaboration. The same happened with the pocket calculator.”

    *


    In the following class, I had my students consider a study, covered by an NPR story from 2024, that looked at the effects of AI on creative writing. The study’s authors recruited 293 amateur writers and asked each of them to write a short story on a given topic. Some writers got story outlines created by ChatGPT, while others were left to fend for themselves. The study authors then recruited 600 “regular” readers, who were unaffiliated with the publishing industry, to give numerical rankings for each story’s “stylistic characteristics, novelty, and usefulness” (i.e. “publishability”) as a workable, if incomplete, quantification of creativity.

    The study’s results indicated that stories using AI assistance were more creative than human-only stories: AI-assisted stories ranked 8% higher for novelty, and 9% higher for publishability, with extra benefits for “the worst writers,” as NPR’s Geoff Brumfiel puts it. “Those that were the least inherently creative,” one of the researchers says, a little more gently, “experienced the largest improvement.”


    This would seem to affirm the virtues of human-machine collaboration, except that the study’s authors noted another effect: the more “creative,” AI-assisted stories turned out very similar to one another. When the writer Annalee Newitz tried to reproduce the study’s results, for example, they found that many AI ideas for an “adventure on the open seas” (one of the study’s test cases) revolved around the trope of finding treasure, with a high recurrence of the phrase “the real treasure was….”

    With these two findings in mind, one of the researchers summarizes the results as a “classic social dilemma,” where individuals benefit to the detriment of the group.

    I asked my students to consider for a moment this group, which contained all stories told by humans in any medium. Did it matter if their favorite books and TV shows all gradually came to resemble one another? They agreed that this obviously sounded bad, though some pointed out that this was already the case, even without AI assistance, due to the beloved formulas associated with popular genres.

    I asked them, then, whether it mattered if their own AI-assisted writing resembled other students’ work. They weren’t sure about this one, so I asked a few students to read aloud the titles of some essays they’d submitted that morning.

    For homework, I had asked them to use AI to propose a topic for the midterm essay, which addressed their relationship to technology. Most students had reported that the AI-generated essay topics were fine, even good. Some students said that they liked the AI’s topic more than their own human-generated topics. But the students hadn’t compared notes: only I had seen every single AI topic.

    Here are some of the essay topics I had them read aloud:

    Navigating the Digital Age: How Technology Shapes Our Social Lives, Learning, and Well-Being
    Navigating the Digital Age: A Personal Reflection on Technology
    Navigating the Digital Age: A Personal and Peer Perspective on Technology’s Role in Our Lives
    Navigating Connection: An Exploration of Personal Relationships with Technology
    From Connection to Disconnection: How Technology Shapes Our Social Lives
    From Connection to Distraction: How Technology Shapes Our Social and Academic Lives
    From Connection to Distraction: Navigating a Love-Hate Relationship with Technology
    Between Connection and Distraction: Navigating the Role of Technology in Our Lives

    I expected them to laugh, but they sat in silence. When they did finally speak, I am happy to say that it bothered them. They didn’t like hearing how their AI-generated submissions, in which they’d clearly felt some personal stake, amounted to a big bowl of bland, flavorless word salad.

    We depend on a calculator to produce identical results no matter who uses it, but identical results in a writing context are boring at best. At worst, these identical results amount to an insidious reproduction of the tropes and stereotypes present in the source text, as has been well documented by OpenAI’s own researchers. As Vauhini Vara argues in Searches: Selfhood in the Digital Age and Lillian-Yvonne Bertram demonstrates in A Black Story May Contain Sensitive Content, the training data have profound implications for our thoughts and the shape they take. Far from being a comprehensive, value-neutral pool of all human writing, large language models like GPT have been trained on disproportionate quantities of Adventure, Fantasy, and Romance novels, as well as male-dominated internet spaces like Wikipedia and Reddit—the source, perhaps, of all those instances of “the real treasure was…,” which is of course a time-honored meme.

    *

    Then things got a little screwy.

    It started with Max’s essay. I had asked my students to use AI to re-draft the introduction to their midterm: had they really needed to spend all that time and effort writing it, or could they have prompted AI to generate a more or less submittable version of what they’d just submitted? We were playing a game where each student read aloud their two introductions and the rest of us guessed which was real.

    Max read us an account of a massive midnight snowball fight. He described the other students’ silhouettes “glowing in the dim, golden light” spilling between the columns of UVA’s neoclassical buildings. “The air smelled clean,” he wrote, “like frozen earth and woodsmoke.” It was a glorious, chaotic opportunity to forget “about my Chem midterm, about how much I missed my dog back home, about how I still felt like an outsider in most conversations here”—until a snowball had smashed into his face, giving him a wicked bloody nose. But even this wasn’t so bad:

    “Dude are you good?” a girl asked, stepping closer. She had snow in her hair and kind eyes.

    “Yeah,” I said, pinching my nose. “Battle wound.”

    We ended up sitting on the steps, me holding a wad of tissue to my face while she told me about the time she sprained her ankle during the same snowball fight last year. She promised me this meant I was officially part of UVA now.

    Surely this was the real Max. The story was lively, funny, and ended with this terrific meet cute—ask her out, Max! Any doubt evaporated when he read the next paragraph. The prose was serviceable, and plausibly human given what I’d previously read of Max’s work. But the repeated comparisons to trench warfare felt clunky to me (“Much like the iconic sequence in the movie 1917…”), and most importantly it wasn’t the Max of the first paragraph, the Max who’d met that girl. The students and I voted unanimously that the first paragraph was the human one.

    You’ve probably guessed where this is going. Max revealed, with a smile that didn’t quite conceal his dismay, that the girl did not exist, because the first paragraph had been written by ChatGPT.

    The classroom erupted in a hubbub of disbelief. I was as shocked as anyone. My ability to spot AI-generated text had until now proven so reliable that it wasn’t even a point of conscious pride, just another flavor of the disappointment I feel when I start reading bad writing.

    When we talked about it, we reflected on the crucial efficacy of the romance plotline. More than any single line of prose, it was the girl that had taken us in. She was so beautiful in her vagueness: the snow flecking her hair of unspecified color and texture, the frisson of erotic worldliness that comes from her being older than our narrator, and of course her “kind eyes.” Perhaps we were so deeply programmed by the rom-coms we’d watched that we’d mistaken a rom-com for reality.

    *

    In conversations about AI and education, it’s less common to hear about instructors using AI for writing lectures, designing assignments, or grading. But even if AI obviously introduces bias and error, it poses advantages for teachers that don’t apply to students: namely, that teachers are always operating at scale. If I assign 54 students five double-spaced pages of writing, as I often do, I’ve assigned myself 270 pages to grade; most semesters, I easily top 1000 pages of grading. You could reasonably tell me to suck it up and stop complaining, because grading is part of my job, but the counterargument—made by school districts including Miami-Dade County Public Schools, the third-largest district in the country—is that the speed and efficiency of AI grading is worth exploring, because it frees up teachers’ bandwidth for individualized, in-person help. Plus, students seem to like the quick feedback. “They could rewrite the paragraphs right away,” reported one 12th-grade literature teacher who used Google’s chatbot to grade an essay on Oedipus Rex, “instead of having to wait a day or two before they would get their essays back from me.” (In this case, the teacher also graded the essays herself.)

    Some students have mixed feelings about the idea of receiving AI instruction or feedback—one student at Northeastern petitioned unsuccessfully for a tuition refund on the basis that her instructor had used AI—but the upsides for school districts and colleges are clear. I am not very expensive in the grand scheme of things, but I am far costlier than licensing an AI writing tutor. Frankly, in the era of DOGE, I’m surprised we haven’t heard more about replacing the left-leaning cadres of public university faculty with cost-efficient, “ideologically diverse” chatbots.

    Call it an attempt to inoculate my students, or masochism, or plain old curiosity, but I considered this to be a key question for my students to answer: even if they believed that college-level writing instruction should occur, did it follow that such instruction must include human instructors?

    The next phase of my experiment was to see if AI could replace me in the specific area of grading. I gave my standard commentary on the students’ midterm essays, then asked them to ask the AI of their choice to give them feedback. Then they had to revise their essay using whichever advice they preferred, and tell me why.

    I didn’t realize how irreplaceable I’d believed myself, how like a John Henry of the networked Humanities, until my students shared their findings. Yes, the majority preferred my feedback—it was noted that the AI models demonstrated an unhelpful fixation on “improving transitions,” whatever that means—but even my strongest advocates noted that their AI tutors often gave advice similar to mine, and faster. And plenty of students simply found AI’s advice more helpful. My student Cruz got great results, according to him, by feeding my annotations to ChatGPT and asking how best to address my questions and concerns. This, truly, was Centaur Chess: a cyborging of my consciousness (and a donation of my intellectual output to OpenAI’s training data) that had the dual advantages of my human discernment and AI’s raw power and speed.

    My gold standard for humanities education comes from my own undergraduate experience in a small major devoted to “Western thought,” wherein eighteen other students and I followed a three-year curriculum of primary texts from Homer through Heidegger. In each of these seminars, we had two instructors instead of one, who came from different disciplines: our Medieval colloquium, for example, featured a historian of early modern Rome alongside a soft-spoken Platonist. Nothing prepared me better for adult intellectual life than getting two sets of contradictory feedback on every essay I wrote for those classes, because I had to decide, over and over, what to make of these expressions of my teachers’ authority. Yes, they knew more than I did, but they couldn’t both be right. Their disagreements opened up a silence, however brief at first, in which I could hear the thing I now call my voice.

    Was it possible, then, that my students and I had just stumbled upon a way to achieve the same goal? If two heads are better than one, was AI simply another head, or two, or three, and at a fraction of what my head cost?

    *

    If you accept both of these use cases—if you believe that students and faculty alike can and should use AI—you quickly encounter a scenario that most people would find logically abhorrent: teachers using AI to evaluate and grade AI-generated “student” writing.

    It isn’t always useful to push an argument to the wall, because we live in a world of compromises, of unrealized hopes and unfulfilled threats. But in this case, it’s a helpful thought exercise. Take two or three steps forward to a world in which AI models are essentially grading their own text output, and you’ll reach the logical terminus of a value system that precedes ChatGPT, and which ChatGPT forces us to either condemn or endorse: that school is reducible to workplace preparedness; that work is reducible to the pursuit of maximum throughput at minimum cost; that there is no difference between school and work; that value is measurable in all contexts; that time is money; that anyone who says otherwise is a scammer or a sucker.


    I’ll admit that this sounds like tech panic, and that tech panic rarely ages well. Back in 1998, for example, faculty and academic officials panicked about the rise of the internet, expressing concerns that seem both quaint and prescient. “Let’s say everyone in the class has a networked computer,” the vice provost of Clemson told the Times. “They can pull down information electronically. Who’s going to watch over every student’s shoulder to see what they’re doing?”

    The answer to that question is: nobody. Before ChatGPT, students could easily use the internet to cheat if they wanted to, either in low-cost ways, such as Course Hero, which allows students to share essays and exam answers across semesters, or in more boutique forms, such as hiring others to write their essays for them. Friends of mine have been compensated generously for providing the latter service, which existed long before the internet; forty years ago, an undergraduate named David Foster Wallace had a side hustle at Amherst writing “term papers for hire.”

    Perhaps ChatGPT has simply democratized this venerable tradition of cheating, thereby reducing the moral trespass we indicate when we use the word “cheating.” As I told my students, when spell-check software first became available, a Washington Post op-ed predicted bitterly that “The careless, the inept, the spelling disabled will be able to survive in a world of words by relying on computers to conceal their own weaknesses.” This is easy to dismiss as elitist and mean—“But how many of you,” I asked my students, “can confidently spell the word embarrassed?”

    My question produced a murmur of embarrassment, as I expected it would, but my point wasn’t that they should feel ashamed. My point was that most people don’t mourn this loss, because spell check wasn’t a hill anyone had wished to die on, because it’s exhausting to give a shit. My point wasn’t that they should give a shit, only that they could. The choice was theirs, as always.

    *

    Then it was time to vote: in the age of generative AI, did my students believe we still needed courses like mine and instructors like me? For our last class of the semester, I had them write essays in which they argued yes or no.

    The vote was decisive. Out of 72 students, 68 voted to affirm that we do need me. But in the discussion that followed, and the essays that I read over the following days, the picture proved more complex.

    Many students described the intellectual journey I’d hoped they would have. “I would once have argued no,” Hannah wrote. Before my class, she’d used AI often because “it is what I believed made my writing better.” But going forward, she and many other students predicted, as Andrew put it, that “I honestly think I will be using AI less than I have ever used it before.”

    Not all students agreed. Max, whose AI essay about the snowball fight had fooled the entire class, cited the “Centaur Chess” argument, as well as the UVA College of Arts & Sciences’ website. First-year writing courses, he wrote, “are supposed to focus on the ‘struggles, possibilities, and accomplishments’ of student writing,” and based on what he’d seen in my course, “AI models are now part of that process.” At the start of the semester, he’d disagreed with the calculator analogy. “Now,” he wrote, “I’m not so sure.”

    Max voted to affirm that the course was still worthwhile. But Nathan, one of the four students who voted that my course wasn’t necessary, took Max’s argument a step further. While some students from “different childhoods and levels of education” might need help writing at the college level, Nathan explained that he’d had “an excellent education up to this point,” for which reason he took the “difficult and dangerous” view that “I do not believe that students of The University of Virginia, a top 3 public school in the country, need a first-year writing course such as this one.” For him, the question wasn’t whether his writing could be improved, but whether there existed a point beyond which improvement yielded diminishing returns. You could argue, he conceded, “that AI will never be able to write as well as a human but again at what point is AI simply ‘good enough’. And while the argument ‘good enough’ may sound like a bad thing, when put in a real-world scenario when is ‘good enough’ not enough? Never.”

    Sam—who wrote that “I’ve never had such a positive experience in an English class”—made a similar argument: “This class helped me to find my voice,” he wrote, “but I would still have been a capable writer without this class,” adding that “I don’t think it’s necessary [emphasis his] to go beyond that.”

    I don’t want to overstate the importance of these arguments against the course. But for my own reasons (self-inquiry? self-flagellation?) I am drawn to them. I suppose I feel obliged to correct for the fact that some students might have voted yes simply to spare my feelings; I admire Nathan’s and Sam’s bravery for saying all this to my face, as it were. What they were saying, in their own ways, is that writing is a craft, and there may be a point at which we’ve mastered it, at least for our own purposes. And if you have access to an electric drill, why would you insist on using a screwdriver?

    It was an argument that had occurred to me too. In the final essay prompt, I’d invited my students to compare my course to learning “to start a fire with flint and tinder in the age of matches and propane lighters”: was this analogy accurate?

    Of the four students who argued that the course wasn’t necessary, another took up this analogy directly. “Reflecting on the fact that 3 credits at UVA costs me $5000+ and 2100+ minutes,” Drew wrote, “I do not believe I grew enough through this course for it to be worth it.” Having noticed only “incremental improvements in [his] writing and thinking,” he concluded that “I would rather have spent this large sum of money and time on a course that interests me and teaches me about my career aspirations, like the finances of real estate. If I need to learn to write, I believe AI can serve me well for MY purpose at a fraction of the cost” [emphasis his]. He acknowledged that my course “can be useful for some students if they want to go into journalism,” but concluded that requiring a writing course for all students “is similar to insisting someone to master starting a fire with flint in an era of propane lighters.”

    Other students disagreed with my analogy. “The analogy is flawed,” Dishi argued, “for unlike fires, all writing is not created equal. Fires, regardless of their size or method of ignition, serve similar primary purposes—providing heat or light. Functionally, the end results are always the same,” unlike writing.


    But some students argued for the course’s value because they saw the analogy as appropriate. Carina, a ROTC student who often attended class in full camo, wrote that “there is a reason people still learn to build a fire that way, in case of emergency with no resources.”

    “Just the other day,” Josie wrote, “the power went out in my dorm. I had a geography paper to write, and I had to do it by hand, due to the fact that I had forgotten to charge my computer.” She admitted that this was an unusual scenario, but added that she’d come to realize, via this experience of literal powerlessness, that writing

    is a gift that should not be taken for granted. While taking this class and hearing some people say that AI is super beneficial and should be allowed to be used to complete assignments, it made me slightly sad. I feel like we are losing something that is of invaluable importance.

    *

    In my admittedly small sample of 72 students, I noticed that the students whose essays expressed the strongest doubts about the course, whether or not they voted no, were all men. I didn’t have the opportunity to ask them about this, but I can speculate along identitarian lines as to why my brethren felt this way. Perhaps we are so used to having our say that we don’t value our voices all that much. Perhaps ChatGPT’s male-dominant training data causes it to produce language that is less likely to cause us lexical dysmorphia when we compare it to our own.

    Three male students made the surprising decision to use AI to write essays arguing that my course, and the practice of human writing, was valuable—using AI to argue, as Donovan did, that “a semester of scribbling, revising, and occasionally tearing my hair out has shown me that the messy human part still matters.”

    I know they used AI because I asked and they confirmed it. I’d read many thousands of their words by this point, and these essays didn’t sound like them. In the sentence above, for example, you can see an example of AI’s telltale three-part lists, as well as the cliché phrase “tearing my hair out,” which I knew Donovan would never write, because he is not the author of the comic strip Cathy.

    The semester was over when I asked, so they didn’t have to reply to me; it is to their credit that they did.

    “I figured it would be funny and ironic,” Donovan explained, to feature AI “harshly exclaiming how AI cannot write certain things.” When I pointed out that the joke he intended would have required an “aha” moment where he told the reader that the text was AI-generated, Donovan said he’d meant to tell me, but “it completely slipped my mind.”

    Hector explained that he’d used AI to revise his anti-AI essay, then “kept the part that interested me.” When I’d read his essay, my gut instinct was that he’d actually done the opposite: drafted the essay using AI, then added a few sentences in his own words, which were identifiable by their comma splices, and by the distinctive Hector-isms I’d come to know over the course of the semester, some of which stemmed from the fact that English is his second language. But after Max’s snowball fight essay, I wasn’t sure I trusted my gut. I let it go.


    Misha used AI to write an essay containing this sentence: “At its core, writing is a tool for thinking, and without spaces that train us to sharpen that tool, it becomes dangerously easy to accept surface level answers as good enough.”

    I decided to press the issue with Misha. When he explained that “with multiple finals coming up I wanted to maximize my time,” I pointed out that “this would seem to be an ideal scenario in which to argue that AI can replace (or supplement) human writing,” and asked why he hadn’t simply said so. I was curious whether he’d meant to spare my feelings, to tell me what he thought I wanted to hear, but he didn’t reply.

    *

    These three were in the minority, I think. I caught them using AI because they disguised it poorly, and other students might have been smarter about covering it up. But when it came to the other students, I took their word for it. In the present technological moment, this may be the only choice we have, students and teachers alike: whether or not to fall back on trust.

    As Misha’s essay indicates, writing about “the power of writing” contains its own stock phrases and brainless clichés. But I also caught glimpses, or I believe I did, of the hauntingly simple power of words. Yes, it’s what I wanted to hear, and I heard it: Writing, wrote Zoey, “is a way to express something that you cannot verbally say out loud,” which made it “a subject as rigorous as science. Everyone can speak, but not everyone can write.”

    Looking back at all her written work from that semester, Adriana saw a writer who “has become more in tune with herself, her surroundings, and the effect that her presence has on her environment. What surprises me the most about what I see is that she is free.”

    Before my class, Cam admitted, “I used chatGPT on almost every, if not all of my assignments.” But our semester of experiments, she wrote, had “allowed me to witness the negative effects using a tool like AI was doing to my brain.” My class, she wrote, was

    the first time in months that I had written something without asking ChatGPT to help me along the process. I quickly realized I had forgotten how to do that. I did not remember how to edit my own work, write without having it jumpstart my thoughts, or not having something to make my writing “sound better.” I came to the realization that ChatGPT has become my crutch rather than a tool.

    If ChatGPT were to read Cam’s essay, I doubt it would pause at this line. But her words have lingered with me because Cam spent the last month of the semester on crutches, so I don’t think she used the word crutch lightly. I have never had to use crutches myself, but I saw Cam struggling to walk, and saw her accepting help from Sam (“I would still have been a capable writer without this class…”), who assisted her in getting to the elevator every day after class. Sam and Cam met in my class, but during the semester, they discovered that they attended the same church; by the end of the semester, he was planning to accompany her on a group trip to the beach.

    A crutch is a tool, of course, but much of the time it’s one that brings about its own obsolescence, like me. I have no doubt that reading and writing will survive without the help of college, but at its best, college offers students the opportunity to learn these skills with, and from, one another. Maybe the real treasure is the friends they make along the way.

    Piers Gelly
    Piers Gelly lives in Charlottesville, Virginia, where he is an assistant professor of English at UVA. His recent work has appeared in The Point, The Dublin Review, and n+1. He is at work on a novel and a collection of essays.




