English 110 Versus ChatGPT

Finding a middle ground with AI usage in the composition classroom.

Months of doomscrolling articles didn’t prepare me for the gut punch I felt when the first students submitted AI-written essays. But there it was on Blackboard, the thing I’d feared: five separate ChatGPT submissions. 

I am an adjunct lecturer in the English Department at the City College of New York, where I teach English 110, the required composition course for incoming students. I believe in linguistic freedom, and I teach my mostly multilingual students that language is both a tool and an identity: even as we learn to wield this tool masterfully, we should never sacrifice our identity in service of it. I teach students to be critical of Standard American English and to understand how the focus on academic or white English reinforces existing power structures. We read June Jordan’s “Nobody Mean More to Me Than You and the Future Life of Willie Jordan,” in which she lays out rules for Black English, and my students do an assignment in which they create the rules of their own English, such as, “Stutter every couple sentences.” I never take points off for grammar and syntax, and before ChatGPT, my attempt at inclusivity in the classroom was working pretty well. But I was naïve when it came to AI. I thought that, because we study these things and because I try to give a respectful amount of homework, my students wouldn’t resort to AI. I was wrong.

I know that using AI is plagiarism, an academic crime that carries severe penalties for these students, who juggle part-time jobs and family obligations while the looming misery of COVID-19 still weighs on them. These are students who go to hospitals to translate for family members, who write emails for their parents, who have lost out on job opportunities because of the kind of language they use in cover letters. They have experienced linguistic discrimination, and now, for the first time, writing can be easier for them with AI. It is, one could argue, making things more equitable.

When I received the submissions, I struggled with how to respond. If I was going to accuse them, I decided, I wanted to be able to prove it. I certainly wasn’t comfortable telling any of my 30 students, whom I respect, that their assignment’s “comma usage and vocabulary” was too advanced to be their own. And who, especially among adjuncts, has the time and resources to run each student’s work through cumbersome detection software? Personally, I think there’s something questionable about using AI to detect AI.

And, more than that, I don’t want to operate from the mindset that my students might be cheating. At the beginning of each semester, I make a promise to myself that, no matter what, I will give my students the benefit of the doubt. If a student doesn’t show up, I reach out with the assumption that some serious barrier has kept them from attending. My frequent phrase in emails is, “I recognize that real life doesn’t always conform to the academic schedule.” But I was surprised by my own feelings of anger and betrayal about AI usage—wasn’t I one of the fun and interesting teachers who gave cool, worthwhile assignments?

I went on a walk around our campus in Harlem and thought. I sat in my windowless shared office and skimmed online articles from respected academics. I messaged my fellow adjuncts for opinions (one said, “I give one warning and then I fail them”). I looked through openly shared assignments that invited instructors to use and teach with AI. But, for all my sleuthing and pondering, I couldn’t find a response or policy that felt right.

There was only one group left to consult.

In Wednesday’s class, after our reading quiz, I projected the following words onto the screen:

  • Is AI/ChatGPT the great equalizer that we need in education? Or does AI rob us of education and individuality?
  • How should we deal with AI in this class?
  • In groups, make an argument answering these questions. You will have 20 minutes to prepare the argument (please make notes and prepare 3 points to support your argument).

They all stared at me, more alert than they’d been all semester. The room was too hot, as always, and I’d propped the door open with a trashcan from the hall. But even in the heat, they were awake. Maybe it was the topic, maybe it was my tone, but they were paying attention. Some of them looked afraid. I kept my curious, cheery demeanor.  

I divided them into five groups, and the room exploded into conversation. I sat at my desk and tried to give them privacy.

The students finished their arguments. They rotated the uncomfortable wooden chairs forward, ready to share.

I wrote their points on the board. Here is some of what they listed:

CON: Don’t allow AI in the classroom

  • AI makes you lazier—don’t do work
  • Lose essay writing knowledge
  • Bland essay, no variety
  • Loss of voice and original opinion
  • Removes purpose of writing (why are we even doing this?)
  • Technically plagiarism
  • Erases other Englishes (like Black English)
  • Teacher feedback is useless
  • Removes research and possibility of changing mind (through research)
  • AI thinks for you, and things that think for you can make you do bad things (my student called this the “evil genius” argument)
  • AI TAKES YOUR VOICE

PRO: Allow AI in the classroom

  • AI can help with revision (it’s almost like having a writing center in your phone)
  • Life can make it impossible to get through school, but we need it to advance in life
  • When life is overwhelming, you have to prioritize, and this class is a requirement, not a passion. AI can help us pass classes and get good jobs.

As I tried to keep up with notetaking on the board, I realized that we teachers were getting sucked into a moral panic. We thought students would take the easy road for the wrong reasons, but as my students spoke, I realized that we were on the same side. One student talked at length about how he worried that, as AI evolved, it would engage in appropriation if it mimicked the English of marginalized groups, particularly African American Vernacular English (he eventually wrote his research paper on this subject). “Either we get erased, or we get appropriated,” he said.  

But one student argued passionately that AI could help him revise his sentences and turn in polished work. He said AI allowed him to get actual useful feedback from instructors—instead of receiving a returned essay marked up with grammar or syntax mistakes, his instructors were making notes and asking questions about his ideas. It improved, not diminished, his learning. He also stated that AI was a perfect starting point for research: he could type in a question and get a few key terms or a brief history of a topic.

After we had filled the board and the chatter had died down, I turned off the projector, returned to my desk, and asked them this final question: “What would you do if you were in my situation?”

They stared at me again.

I told them that I knew people in the class were using AI, but that I genuinely didn’t know what to do. I assured them that, whatever they decided as a group, I would honor as long as it didn’t violate the Academic Integrity Policy.

Slowly, over the next 15 minutes, with the sun streaming through the windows and the occasional footstep echoing down the hall, we drafted an agreement on how to handle suspected ChatGPT cases in our classroom. I wrote the following:

I promise to:

  • Give students the benefit of the doubt—have a conversation
  • Not make a blanket rule—go case by case and give the opportunity for make-ups
  • Appreciate the tool—it allows students to practice crafting language and grammar without speaking for them
  • Allow students to use ChatGPT in course emails and non-graded correspondence
  • Learn from students. Because it isn’t my area of expertise, I will allow them the chance to explain before I forbid any particular thing

We (the students) promise to:

  • Be honest and upfront about our usage
  • Trust that we will get the opportunity to make up work
  • Ask for extensions or support if we need it

Our class time wrapped up, and I gathered my papers, unplugged my computer, and went to straighten the chairs. Some of the students said goodbye and thank you; others had been half out the door five minutes before the end of class.

I don’t know if this is the answer to the conundrum of AI in the classroom. All I can say is that, in the 70+ written submissions I’ve received since drafting this policy with them, it has continued to be a conversation. I had one student come to me after turning in a paper and admit to using AI to write it; in that case, I asked him to rewrite the paper for a different intended audience, following the rules of his own English. He ended up “translating” the paper so that it was written as a long letter to his best friend—and it was a well-argued and convincing piece. I will say that I have gotten significantly more requests for extensions. I’m okay with that and always grant them, even though it makes my calendar into a wild thing. I urge my students to remember that their voice, their words, are more important and interesting than anything AI can create, but they already know that. AI use is more about time than it is about curiosity or desire to learn—students turn to it because they are overwhelmed, not because they are lazy.

Soon, we will need to grapple with the question of the essay itself, and why we use it as the definitive demonstration of knowledge. But for now, I think the best way to handle AI is to invite students to think it through and to give them context about the politics of language. If we instructors respect our students, why not do what we’ve always done: educate them and trust them?

More on ChatGPT in the classroom:
Read “Alternative Strategies for Artificial Intelligence in the Writing Classroom: An Educator Asks ChatGPT How to Use the Online Chatbot in Writing Classrooms” by Abriana Jetté.

Featured image created using AI image generator Craiyon.com.

India Choquette

India Choquette (she/her) is an adjunct instructor at the City College of New York where she is a graduate student in the Creative Writing MFA program. In 2023, she was awarded the “Teacher-Writer Award” by the English Department, and she has been an editor at Promethean, the college’s literary journal, since 2021. Outside of CCNY, she is a mentor with Girls Write Now, a NYC-based nonprofit that uplifts the voices of young writers. She was a Katharine Bakeless Nason Scholar at the Bread Loaf Environmental Writers Conference, and her fiction has been published in Foglifter Journal. She lives with her spouse and two cats in a plant-filled apartment. She is an editorial fellow at Teachers & Writers Magazine.