I first published this post in January 2024, as we were heading back to class after a semester grappling with how generative AI was impacting the classroom. As we head into a new semester, I saw an uptick in visits to that post, which confirms my sense that more people are thinking about these issues now that AI tools are so widely available and schools are beginning to embrace them. More people are looking for guidance, and the guidance they find is often either too prescriptive or not prescriptive enough. I’m working on a longer set of “rules,” but in the meantime, here are my original four rules.
Rule #1: Understand how LLMs work.
Last fall I taught a writing course called “To What Problem is ChatGPT the Solution?” For their first assignment, I asked my students to explain how LLMs work to an audience of their choice. It seemed logical to me that before they could grapple with both practical and ethical questions about generative AI, they should understand how it works. After that assignment, I could see a difference in how discussions unfolded; they were more critical, more skeptical, and more knowledgeable. Some of them changed their minds about whether they would want to use generative AI in their writing; others changed their minds about when they thought they would use it.
While you may never have felt the need to do a deep dive into how spellcheck works before using it, this is different. If you’re thinking about outsourcing your writing or editing to generative AI, you should understand what’s happening when you do, and that means knowing what we don’t understand about these systems as well as what we do. My students started their exploration of how LLMs work with this article by Timothy B. Lee and Sean Trott. For a brief overview, I also recommend this piece that the Financial Times published in September. If you’re going to use AI tools, you should also understand the problems of bias and hallucination in these systems. I highly recommend this article for a deep dive into how these systems work, this quick overview of hallucinations, and this Times article about some of the researchers studying bias in AI.
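If it helps to make the core mechanic concrete, here is a toy sketch in Python of the next-token loop at the heart of these systems. The hand-built probability table is purely illustrative (a real model learns its probabilities from vast amounts of text with a neural network), but the loop shows the essential point: the model repeatedly samples a plausible next word given the words so far. Plausibility, not truth, drives each choice, which is one way to see why fluent-sounding hallucinations happen.

```python
import random

# Toy stand-in for a trained language model: a hand-built table of
# next-token probabilities. A real LLM computes these probabilities
# with billions of learned parameters; the generation loop below is
# the same idea either way.
NEXT_TOKEN_PROBS = {
    "the": {"student": 0.6, "essay": 0.4},
    "student": {"wrote": 0.7, "argued": 0.3},
    "essay": {"argues": 0.8, "claims": 0.2},
    "wrote": {"the": 1.0},
    "argued": {"the": 1.0},
    "argues": {"the": 1.0},
    "claims": {"the": 1.0},
}

def generate(start: str, steps: int = 6) -> str:
    tokens = [start]
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS[tokens[-1]]
        words = list(probs)
        weights = [probs[w] for w in words]
        # Sample the next token. The model picks what is statistically
        # plausible, not what is true -- there is no fact-checking step.
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the student wrote the essay argues the"
```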
Rule #2: Recognize when writing is thinking.
In my brief time as a writing-teacher-pundit, I’ve spent a lot of time arguing that we figure out what we think through writing. Writing is not the only way we figure out what we think, of course. But writing is an important way of thinking through a problem or question or idea. And writing multiple drafts can help us sharpen our ideas and understanding.
I’ve seen this happen so many times in my classroom over the past 20 years. It happened this fall as my students wrote about generative AI. (It’s happening to me as I write this newsletter!) I often see my students write themselves to a really interesting idea in the conclusion of a draft—and I always advise writers to look at their conclusion for a more interesting, clearer version of their main ideas.
With that in mind, here’s my AI rule: Before you use generative AI to write that first draft, or that revision, or anything in between, make sure you know what you might be missing by not doing the writing yourself.
Rule #3: Use writing feedback carefully.
I’ve been asked a lot this year about whether it’s a good idea to use a chatbot for writing feedback. I’ve talked to a number of people who have found chatbot feedback useful. But there’s also a lot of opportunity for bad feedback or feedback you just don’t know how to use. You can read about one of my experiments assessing ChatGPT feedback here. The short version: If you’re not confident that you understand what will make your writing stronger, you shouldn’t be relying on a chatbot for writing feedback.
Here's the thing: LLMs are not magic (see above, “understand how LLMs work”), and the feedback generated is not always going to be useful for what you’re trying to do. Sometimes it will be wrong. I’m working on an experiment right now with one of my students to test some of the limitations of chatbot feedback, so more on that in 2024.
But here’s what is true about asking a bot for writing feedback: as with feedback from any source, you should think about how you’re using it and what you need to know to use it well. To use feedback effectively when you’re writing for work or publication, you need to know what you’re trying to do, and you need to know enough about good writing to recognize what makes sense, what doesn’t, and when the bot is giving you bad advice. Unlike your writing teacher or a trusted colleague, the bot doesn’t actually understand you or what you’re trying to do. And it’s often surprisingly bad at basic grammar tasks (I asked it to fix parallelism multiple times and it never could), so stick with a grammar checker (which also gets these things wrong sometimes) for that.
Rule #4: Resist the idea that outsourcing your thinking is inevitable.
The AI hype machine is robust and relentless. I first fell victim to the hype in late 2022 when a researcher at a big tech company assured me that in two years no one would be taking writing classes unless they wanted to be writers. Her message: writing was pretty much over. But almost two years in, I’m confident she was wrong.
This year, we’ve seen predictions that AI will take over pretty much every task we find meaningful, along with those we may not mind outsourcing. But how this all plays out is not inevitable, no matter how much any news story or tech company suggests that it is. So what does this mean for how to think about writing now? As we begin the new semester, I’ll leave you with two principles I offer my students at the start of every term.
First, I tell my students that there’s no point in writing an essay if they aren’t figuring something out or learning something along the way. This principle doesn’t translate perfectly to other types of writing; sometimes you just have to write that boring email. But it translates more often than we might realize. For example, I’ve helped many people write job cover letters, which may seem like a writing chore without much payoff. But often, in the conversations I have with people writing those letters, a coherent story emerges of why they would be a good fit for the position. The process of writing multiple drafts becomes a process of thinking more clearly. When there is value in that writing process, we should keep doing our own writing.
Second, I tell my students about a conversation I had many years ago with the head of the fact-checking department at The Atlantic. She asked me a question about a piece that I could not answer. “If you’re going to put your name on something,” she said, “don’t you want to know that it’s true?” My answer was yes: I wanted to be able to sign off on something with the confidence that it was true. I wanted my words to matter because they were mine. I tell my students that their words should matter to them, and I hope they do.
Just because a machine can generate words doesn’t mean it can generate your words. In fact, the words it generates may just belong to someone else.
Some thoughtful people to read on AI and writing and thinking.
Lauren Goodlad and the team at Critical AI
Benjamin Riley at Cognitive Resonance
Josh Brake, The Absent-Minded Professor
John Warner (Biblioracle)