My fingers trembling, I typed a response to my professor. He had just accused me of using artificial intelligence for my recently submitted essay and changed my grade to a zero. If I didn’t get a good grade on this essay, I would fail the course, my GPA would drop significantly, and I would lose an academic scholarship to the university I was transferring to in the fall. I hadn’t used AI on the paper; I had never used it on any assignment. Yet his supposedly reliable artificial intelligence detector had flagged my writing as “0% Human Generated.”
The unfortunate truth about this scenario is that my professor was right to pursue action against someone he assumed was engaging in academic dishonesty. In fact, educators have a responsibility to enforce the AI policies they clearly outline at the beginning of every course. The problem is that AI detectors are not advanced enough to accurately recognize artificially generated work. Like me, many other students have been falsely accused of using AI, and many educators have been put in the awkward position of falsely accusing a student. The inaccuracy of AI detectors is eroding the relationships between students and professors by sowing seeds of distrust in the classroom. I’ve seen it with teachers struggling to deal with academic dishonesty; I’ve seen it as a student having to defend my original work against a painful accusation; and I’ve seen the discomfort and fear AI brings both students and teachers.
My first experience with artificial intelligence occurred when I was working on a paper for an American literature class. As I was brainstorming, my brother approached me and asked what I was writing about. I told him my thesis and watched with skepticism as he plugged the prompt into the chat box of ChatGPT. To my surprise, the AI began generating a five-page paper. My mind was blown. A paper that would have taken me more than two hours to write had appeared in a matter of seconds. And the thing was, it wasn’t that bad. Sure, it was a little plain and monotonous at times, but it made some good points, points I hadn’t even thought to make in my argument. Suddenly I knew why I had been hearing so much about this new generative AI machine.
For a lot of fellow students I knew, using artificial intelligence was an easy decision. When it felt too overwhelming to write a paper amid all the other homework assignments we were facing, they simply plugged a prompt into ChatGPT and watched their essays materialize in front of them. Kids at my youth group shared tips about sending AI essays through “humanizing” AI models that were designed to mask artificial intelligence from detectors. My brother tried to show me how to change certain words to make an essay appear human-written. A friend told me that using ChatGPT to generate an outline of her paper wasn’t cheating because the words themselves were still her own. A co-worker argued that he was justified in using AI for general education assignments because the classes were redundant to his education. All the tools to create a convincing, completely artificial essay were just clicks away on free websites all over the internet. I felt like the last fortress still standing in a land that had been conquered by academic dishonesty.
While studying one night, a friend told me he was putting his paper through ChatGPT. I looked up, puzzled. He knew how I felt about artificial intelligence in the classroom.
“My professor said we should use ChatGPT to fix any spelling or grammar issues,” he clarified.
I was relieved that he wasn’t resorting to academic dishonesty, but I was still confused. I asked him why his professor told the class to do that. He shrugged. “He just said to tell ChatGPT not to change any of the words, just to fix grammar and spelling mistakes.”
Interestingly enough, this professor had encouraged his students to use artificial intelligence as a tool to improve the papers they had already created. While I understood that this professor was trying to bridge the artificial intelligence gap by encouraging AI as a glorified spellchecker, using ChatGPT to correct syntax and spelling errors lessens students’ motivation to learn those skills. This exchange made me wonder how AI should be approached in the classroom. Is this editing assistance the right solution? If it isn’t, what is?
I personally think AI has no place in the classroom. Using means other than given texts and the student’s own analysis skills circumvents learning opportunities. If we’re too focused on sending our AI-generated work through humanizer after humanizer to make sure it doesn’t get flagged, we aren’t developing our ability to interpret and draw conclusions from a text. Using artificial intelligence for homework deprives students of the opportunity to develop critical reading and writing skills––skills that will allow them to thrive both in higher education and in the workforce.
This year, I’m taking a first-year English class at my university. My English professor has been hit with an interesting problem: this new cohort of incoming students, who were eighth graders during the COVID-19 shutdown, has not been adequately prepared to write a close reading essay. My professor noted how close reading and analysis skills are usually taught in eighth grade and freshman year of high school, the time we were stuck learning through a computer. When school was back in person, teachers pushed ahead as if we had fully mastered these skills during the shutdown, which not all of us had. While this can provide some context for the reliance on AI, when students choose to falsify their work by using generative AI, they are forfeiting an opportunity to grow in a skill that will be useful for their entire lives. Education is about preparing students for the future. No matter how advanced artificial intelligence becomes, knowing how to use it won’t supersede the ability to develop our own critical thinking skills. But that means that teachers might need to slow down and teach skills that we didn’t get the chance to develop during online schooling.
I choose not to use artificial intelligence in my work not because I’m scared of engaging with new technology, but because I want to grow and learn as a student. I’m attending university so I can develop strong skills both in my major and for real-world experiences. This present education is my future, so I’m not planning on wasting it on ChatGPT. While I do try to get good grades, I’m mostly trying to learn. Using artificial intelligence on assignments is not learning. One of my professors is giving an oral final exam at the end of the year because of how deeply generative AI has corrupted the world of education. That he is in a place where he can no longer trust his students not to use artificial intelligence is a sad reality. I feel it is my duty as a student to obey the syllabus when it says not to use generative AI. I’m not trying to make professors like me more than other students; I’m trying to learn from them.
When I sent my email to the professor about being falsely accused of using AI, I knew I had to respect whatever decision he made. It was up to him to either believe his AI detectors or me, his student. In that moment, I was filled with resignation. I never wanted to have to defend my written work as my own, but that was the world we lived in. As someone who tries to avoid confrontation, I found it very hard to write that email. I didn’t want to suggest that my professor was wrong, but I also couldn’t afford to lose those points at the whim of an AI detector. Thankfully, after reviewing other pieces I had submitted in his class, my professor chose to believe me and restored the original grade for the assignment. He apologized for his accusation, and we were able to bridge the chasm AI had dug between us, discussing artificial intelligence together respectfully through a series of emails. I have a lot of respect for him as an educator for enforcing his academic dishonesty policy, but also for choosing to treat me as an adult and believe my story, even if it meant admitting he had been wrong in the process.
As students, we need help. We need educators who can walk beside us, who challenge us when we aren’t following the rules, but who are also willing to admit that they could be wrong, especially when they are relying on technology, too. We need educators who can treat us like young adults and adapt to the world we are in. We need educators who are willing to push us and themselves outside of our comfort zones so we can actually learn how to address this new technology together. Sometimes it feels like we have to fight our instructors, and that is not helpful for our education. Our futures are hanging in the balance, and we need instructors who are willing to guide us forward, not hold us back.
Read more about AI in the magazine:
• “Why Not Use AI to Do Your Homework?: Writing Is a Form of Thinking” by Philip Nel
• “AI in the Creative Writing Classroom: A Conversation on Teaching Writing in a ChatGPT World” by Matthew Burgess, India Choquette, Abriana Jetté, and Darius Phelps
• “English 110 Versus ChatGPT: Finding a Middle Ground with AI Usage in the Composition Classroom” by India Choquette
• “Alternative Strategies for Artificial Intelligence in the Writing Classroom: An Educator Asks ChatGPT How to Use the Online Chatbot in Writing Classrooms” by Abriana Jetté
Featured photo by cottonbro studio.
Audrey Grice
Audrey Grice is a current undergraduate student at George Fox University studying English with a writing focus and Cinematic Arts. Once she graduates, she hopes to continue her education through a master's degree and pursue a career in teaching or journalism. She's recently gotten into writing poetry, but she also enjoys fiction writing and loves to read fantasy novels. When she's not studying for classes, she likes hanging out with her friends and being active. This is her first publication.