2023 was quite a year to be a writing teacher. Pre-ChatGPT, there wasn't much demand for writing-instructor pundits, so it was unusual for me to kick off 2023 with an interview on CBS Sunday Morning. In that interview, I made the case that I have continued to make throughout the year: because writing is a way of figuring out what we think, we should think twice before outsourcing that process to generative AI. In the year that followed, I've had the opportunity to talk and write more about writing, thinking, and chatbots than I could ever have anticipated. Along the way, I've learned a great deal about generative AI and about the possible futures we're facing, and I'm grateful to have encountered so many thoughtful people who are also writing and thinking about AI. (I've included a short list of people to follow and read at the bottom of this post.)
There’s much we don’t know about what lies ahead. But what we do know is that the option to use AI in our writing will soon be everywhere—for email, for word processing, for social media.
With that in mind, I’d like to offer my “rules” for thinking about writing in 2024.
Rule #1: Understand how LLMs work.
This fall, I taught a writing course called "To What Problem is ChatGPT the Solution?" For their first assignment, I asked my students to explain how LLMs work to an audience of their choice. It seemed logical to me that before they could grapple with both practical and ethical questions about generative AI, they should understand how it works. After that assignment, I could see a difference in how our discussions unfolded: my students were more critical, more skeptical, and more knowledgeable. Some of them changed their minds about whether they would want to use generative AI in their writing; others changed their minds about when they would use it.
While you may never have felt the need to do a deep dive into how spellcheck works before using it, this is different. If you're thinking about outsourcing your writing or editing to generative AI, you should understand what's happening when you do, including what we don't know as well as what we do know. My students started their exploration of how LLMs work with this article by Timothy B. Lee and Sean Trott; for a brief overview, I also recommend this piece that the Financial Times published in September. If you're going to use AI tools, you should also understand the problems of bias and hallucination in these systems. For a deeper dive into how these systems work, I highly recommend this article; for a quick overview of hallucinations, read this one; and here's a Times article about some of the researchers looking at bias in AI.
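If it helps to make the core idea concrete before you dive into those articles: at heart, an LLM generates text by repeatedly predicting a plausible next token and appending it. Here is a toy sketch of my own (a simple bigram word predictor, not anything from the articles above, and vastly simpler than a real LLM) that shows the predict-append-repeat loop:

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: a bigram "language model" trained on a tiny corpus.
# Real LLMs learn next-token probabilities with neural networks over enormous
# datasets, but the generation loop (predict, append, repeat) is the same idea.
corpus = "writing is thinking . writing is a way of figuring out what we think ."
words = corpus.split()

# Count which word follows which in the training text.
follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Sample the next word in proportion to how often it followed this one.
        choices, counts = zip(*options.items())
        out.append(random.choices(choices, weights=counts)[0])
    return " ".join(out)

print(generate("writing"))
# Possible output: "writing is a way of figuring out what we"
```

Even this toy version hints at why hallucination happens: the model outputs whatever is statistically plausible given its training text, with no notion of whether it's true.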
Rule #2: Recognize when writing is thinking.
In my brief time as a writing-teacher-pundit, I’ve spent a lot of time arguing that we figure out what we think through writing. Writing is not the only way we figure out what we think, of course. But writing is an important way of thinking through a problem or question or idea. And writing multiple drafts can help us sharpen our ideas and understanding.
I’ve seen this happen so many times in my classroom over the past 20 years. It happened this fall as my students wrote about generative AI. (It’s happening to me as I write this newsletter!) I often see my students write themselves to a really interesting idea in the conclusion of a draft—and I always advise writers to look at their conclusion for a more interesting, clearer version of their main ideas.
With that in mind, here’s my rule for 2024: Before you use generative AI to write that first draft, or that revision, or anything in between, make sure you know what you might be missing by not doing the writing yourself.
Rule #3: Use writing feedback carefully.
I’ve been asked a lot this year about whether it’s a good idea to use a chatbot for writing feedback. I’ve talked to a number of people who have found chatbot feedback useful. But there’s also a lot of opportunity for bad feedback or feedback you just don’t know how to use. You can read about one of my experiments assessing ChatGPT feedback here. The short version: If you’re not confident that you understand what will make your writing stronger, you shouldn’t be relying on a chatbot for writing feedback.
Here's the thing: LLMs are not magic (see above, “understand how LLMs work”), and the feedback generated is not always going to be useful for what you’re trying to do. Sometimes it will be wrong. I’m working on an experiment right now with one of my students to test some of the limitations of chatbot feedback, so more on that in 2024.
But here’s what is true about asking a bot for writing feedback: as with any writing feedback, you should think about how you’re using the feedback and what you need to know to use it effectively. To use writing feedback effectively when you’re writing for work or publication, you need to know what you’re trying to do, and you need to know enough about good writing to know what makes sense and what doesn’t—and to recognize when the bot is giving you bad advice. Unlike your writing teacher or trusted colleague, the bot doesn’t actually understand you or what you’re trying to do. And it’s often surprisingly bad at basic grammar tasks (I asked it to fix parallelism multiple times and it never could), so stick with a grammar checker (which also gets these things wrong sometimes) for that.
Rule #4: Resist the idea that outsourcing your thinking is inevitable.
The AI hype machine is robust and relentless. I first fell victim to the hype in late 2022 when a researcher at a big tech company assured me that in two years no one would be taking writing classes unless they wanted to be writers. Her message: writing was pretty much over. But one year in, I’m pretty confident that prediction will not come true!
This year, we’ve seen predictions that AI will take over pretty much every task we find meaningful along with those we may not mind outsourcing. But how this all plays out is not inevitable, no matter how much any news story or tech company suggests that it is. So, what does this mean for how to think about writing now? As we begin 2024, I’ll leave you with two principles I offer my students at the beginning of each semester.
First, I tell my students that there’s no point in writing an essay if they aren’t figuring something out or learning something along the way. This principle doesn’t translate perfectly to other types of writing—sometimes you just have to write that boring email. But it translates more than we may realize. For example, I’ve helped many people write job cover letters, which may seem like a writing chore without much payoff. But often in the conversations I have with people writing those letters, a coherent story of why they would be a good fit for the position emerges. The process of writing multiple drafts turns into a process of thinking more clearly. When there is value in that writing process, we should keep doing our own writing.
Second, I tell my students about a conversation I had many years ago with the head of the fact-checking department at The Atlantic. She asked me a question about a piece that I could not answer. "If you're going to put your name on something," she said, "don't you want to know that it's true?" My answer to that was yes: I wanted to be able to sign off on something with the confidence that it was true. I wanted my words to matter because they were mine. I tell my students that their words should matter to them, and I hope they do.
Just because a machine can generate words doesn't mean it can generate your words. In fact, the words it generates may just belong to someone else.
My wish for all of you for 2024 is that your own words will matter, both to you and to those who read them.
Some thoughtful people to read on AI and writing
Lauren Goodlad and the team at Critical AI