The Ethics and Artistics of Generative AI

What I learned using ChatGPT to write an academic paper.

There is no preschool in the summer, so I take my three-year-old twins to the beach a lot. They like to dig holes and eat french fries. That’s what we were doing on a recent Wednesday, leaning against a massive trunk of driftwood at Golden Gardens in Seattle. Nearby, a couple of guys in their late twenties sat drinking PBRs and reading. One of them, I noticed, was working through a well-thumbed copy of The Making of the Atomic Bomb, so I asked him if they worked in AI.

“Did the mustaches give us away?”

I chinned towards his idea of breezy beach reading, having recently learned that Richard Rhodes’s 1986 profile of Manhattan Project physicists is a perennial bestseller among AI programmers. I then asked the question I’ve been itching to ask just such a person. “So. What’s your P-Doom?”

Both of them laughed, perhaps amused to hear a stranger go there so quickly. Mustache #1 explained that his P-Doom was maybe at three or four, while Mustache #2 placed his around 25.

P-Doom, short for probability of doom, represents how likely one believes it is that the development of AI will eventually lead to the extinction of the human species. Hence the analogy to nukes. P-Doom went mainstream following the release of ChatGPT, and it’s no coincidence that, around the same time, English professors everywhere got their own version of P-Doom that goes something like this.

POV: You are in a faculty meeting or at office hours when a fellow writing instructor approaches you and asks, “What’s the probability that our students are using AI to do absolutely all of their assignments?”

Like lots of other writing teachers, I treated the verb use as basically synonymous with cheat, so please feel free to visibly, audibly cringe now as I tell you that I recently used ChatGPT to write a paper. Before you chuck these pages down in disgust, let me state for the record that I let ChatGPT write exactly zero of the 287 sentences in my essay entitled “You in the Future Tense: Fiction, Feelings, and Selfhood in the Age of AI.”

As it turns out, using AI to write does not mean you have to let AI do any actual writing.

So, if AI didn’t write any of my paper, what did it do? 

What follows are five things I found AI to be useful for, followed by two it surprisingly wasn’t. I’ll then conclude with four very simple rules on how to write ethically and artistically with AI, if you are so inclined.

What AI Is Good For

It started out as straight opposition research. If AI usage in my courses was going to continue creeping northward, I’d better get better at noticing. My initial intention was to just mess around, maybe ask the bot to explain how bots work, but after returning to the same conversation over the course of days and then weeks, a 200-page conversation took shape. I started to get excited. The bot and I discussed how creative writers’ obsession with sentences sounded a lot like how LLMs work, specifically how they predict the next likeliest word to appear in a sequence. From there, the conversation ranged into neuroscience, new and classical theories of emotion, insight meditation, and the epistemology of language. Suddenly four topics looked like one topic: fiction, feelings, large language models, and selfhood. All this transpired before I decided to write a paper. Which leads me to my first and perhaps favorite use of AI: it’s somebody smart to talk to about anything at any time who never gets bored.

Another big advantage of talking with AI is that it can deepen research. Before writing my paper, I’d never heard of Constructed Emotion Theory or of Lisa Feldman Barrett. ChatGPT introduced us, recommending I check out her highly influential How Emotions Are Made as soon as I started sniffing around the idea that both human- and AI-written language, fiction, and feelings are all predictive in the same way. Without Barrett, this little notion of mine would have remained a whim, but her research provided a scientific foundation for what became my thesis.

Outlines are standard issue in research writing and all sorts of expository composition, and ChatGPT and Claude and Gemini et al are oh-so-happy to compile a smart, easy-to-follow outline based on your running conversation. 

Here’s a weirdly satisfying exercise: after chatting with a model about your thesis and sources, ask it for an outline of your project, which you can then copy-and-paste into your document. Even if you write a perfectly irrelevant Chapter I and II and III based on whichever whimsical instinct governs such matters, you can paste those pages back into the model and ask it to update your outline accordingly. It will, cheerfully. Then you can cut-and-paste the updated outline back into your doc.

Another handy trick: every time you come across a quote or source you want to use somewhere, type it into the model and ask that it be added to your outline. 

Good ideas live on the far side of bad ones, so bad is actually good. I regularly ask ChatGPT for a list of possibilities, only to hate them all and instantly come up with one I like. For example, Section IV of my paper was going to be important, the culmination of several argumentative threads. It needed an extra good title, something snappy but still poignant. So I pasted the section into the model and asked for a dozen suggestions. The Self in Motion . . . How the Self Writes . . . Writing Under Simulation . . . Affect and the Fiction of Me. . . . None of them had it. But five seconds later, Sentencing the Self popped into my head, and I knew that was the one. This pattern seems to characterize my AI-use so far: ask the model for suggestions, hate them, then come up with one I like. 

Wisprflow is a speech-to-text application, by far the best I’ve used to date. I now use it to give long, detailed, highly personal feedback to my students on their stories and research papers. In fiction courses, for example, I typically compose a four- to five-page reaction letter for every workshop submission. Typing those comments out used to take me at least half an hour. Now, I dictate my comments into Wisprflow in a cogent but off-the-cuff tone, enter the transcript into ChatGPT and say, “Please polish this very, very lightly.” If I stutter or if someone interrupts to ask how long the bathroom’s been flooded, it knows to cut that part out. If I repeat the same rough idea three times, it generally keeps the best iteration. Otherwise, so long as I use that double “very” in my instructions, it leaves 95% of my words intact with just a touch of copy editing. What used to be a 30-minute process now takes about 10 minutes without shortchanging my students in the slightest.

What AI Is Not So Good For 

That’s right. I said it. AI is not great at writing. Am I being contrarian? Sure. But I’m also serious. AI writing is always good. But it’s rarely very good. And it is never great. Period. 

Here’s an updated Turing Test for this LLM-narrated age. How do you know if AI-generated prose is genuinely impressive? Ask yourself this: would you put your name on it and pass it off as your own? 

In Co-Intelligence, Ethan Mollick talks about “the button,” the one-click ability to get AI to write you absolutely anything in an instant. When I started writing my paper with ChatGPT’s help, I recoiled from all those little Mephistophelean offers that appeared at the bottom of every exchange. 

“Would you like me to draft an introduction?” 

“Would you like me to compose your entire paper?” 

“Want me to accept an offer of publication from The New Yorker and negotiate for higher pay?”

After a while, I realized that if I really wanted to see what the model could do, I’d better accept some of these advances, so I started saying, “Yes, please show me a draft.”

To be fair, it’s amazing that these things work at all. Behold, a talking calculator! The torrent of sensationalism that followed ChatGPT’s release was not disingenuous. But I also must say that, ethics notwithstanding, the output was never something I would proudly attach my byline to. At first blush, the writing was fine. But when I really pulled a comb through one line after another, I found something lacking. There were little gaps in its reasoning. It repeated patterns a lot. The same syntax came up. The same cadence and mouthfeel.

Admittedly, I’m being arrogant. Most creative writers I know are precious about their prose. Even before AI, whenever Grammarly or whatever embedded grammar check recommended an edit to an email, I’d be like, Shut up, robot—don’t tell me where to put my commas! Kidding aside, a lot depends on what writing really means to you. Is it a way to transmit information, or is it an art form? In the grand scheme of the written world, most writing is just facts and most of it remains unread. Think of every instruction manual, every brochure or user agreement, every back of the cereal box or Meet the Team page buried deep on some corporate website. But if you consider yourself an artist on any level whatsoever, then you know art is all about feelings, specifically yours, and so what can you hope to gain by asking a literary bubble machine to speak on your behalf?

Investors in AI are banking that this technology will usher in a new era of hyper efficiency. As much as I love using the phrase Fully Automated Luxury Communism, I’m not holding my breath. My very limited experience with writing with AI is that, while interesting and fruitful, it did not save me time. In fact, it almost certainly elongated my process. Using an LLM, I found I did not need to keep track of everything myself but could rely on our chat to do so, resulting in a broader canvas containing more research and more ideas. Was my finished paper better for it? Hard to know. If I had not used AI, it seems fair to assume that I might have related fewer concepts but entered a deeper flow state faster. In other words, less info, more me. Perhaps. Either way, I was surprised to see that, from start to finish, using AI to write did not quicken my process. It’s also worth pointing out that building and maintaining these models requires a nearly unfathomable amount of data, resources, and energy, making them arguably the least efficient machines ever created, but that’s a whole other can of worms.

Finally, My Four Simple Rules for Writing with AI

These are the guidelines I post in all my syllabi.

Call it the Golden—no, the Rare Earth Rule. Don’t let AI write your sentences for you. Full stop. To say you wrote something means you wrote it. The easy way to ensure this: never cut-and-paste from a model into your document.

This includes brainstorming. Don’t ask AI for help coming up with ideas. That’s on you. You need to be able to confront the abyss within. That’s what creation is all about. You need to be able to look inside and see nothing but a blank black page and feel a little bit of terror. Don’t worry. Very soon your heart will turn into a velvet top hat out of which something cool will pop. Don’t let generative AI rob you of this awful, awesome experience. A recent MIT study, “Your Brain on ChatGPT,” suggests that we lose this ability scary-fast when it’s outsourced, so don’t mess around. Do as much heavy lifting as you can sans AI before accepting one of those devilish little offers.

It goes without saying that it is neither ethical nor artistic to prompt AI for some writing which you then pass off entirely as your own. However, it does seem in vogue these days to think of AI as your co-conspirator or wingman or colleague. Is that ethical? The devil is in the details, of course, but my short answer is sure, why not. But is it artistic? To that I give a resounding hell no. Go ahead and use AI as your copy editor. Ask it for feedback. If you get stuck on an especially convoluted sentence, request seven alternatives. Use it as a research assistant or to discuss topics that put your spouse to sleep or to find a word that rhymes with orange. But it is always a tool, never a partner.

While we are not legally required to cite AI-generated or -assisted prose, you should be ready to disclose to any professor or editor or employer or reader exactly which parts of your writing come from where. Don’t let the cover-up be worse than the crime. The best writing, after all, has always had a confession kink. What better way to keep your P-Cheat down than that? 

Featured photo: “Ivy Mike” atmospheric nuclear test – November 1952

Dan Tremaglio

Dan Tremaglio is the author of two books of fiction, most recently the novel The Only Wolf Is Time. His stories have appeared in numerous publications, including F(r)iction, The Master’s Review, and The Collidescope, and have three times been named a finalist for the Calvino Prize. He teaches creative writing and literature at Bellevue College outside Seattle and is a senior editor for the journal Belletrist.