What we lose when machines do the writing
If machines are doing the writing, then no one is doing the thinking.
Greetings, all!
For last Sunday’s Boston Globe Ideas section, I wrote about how artificial intelligence is getting very close to being able to do our writing for us—and why this should worry all of us. The article focuses on college students, but AI is going to play a significant role in writing for everyone, and we should all be thinking about what will change. (TL;DR: just because you can outsource your writing to a machine doesn’t mean you should.)
The article is only available on the Globe website for subscribers, so I wanted to share it with you here.
If you’ve already read the article, scroll down for some outtakes that didn’t make it into the published version, including
a machine-generated response to my claim that we should not outsource our writing to machines,
some examples of GPT-3-generated writing, and
links to a few other interesting articles about writing and AI.
Related: People ask me all the time what I think about Grammarly and other apps that already promise to improve your writing. I’ll have more to say about that—and about what you need to know in order to use those tools effectively—in my next newsletter.
Here’s the article:
What we lose when machines do the writing
As a college writing teacher, I’ve been joking with colleagues for years about what will happen when artificial intelligence becomes competent enough to write student papers — and to grade them (the rest of us will head for the beach!). The idea that language-generating AI was going to change the way people write was a distant concern, until it wasn’t.
Since OpenAI made its language-generating model, GPT-3, available to the public last fall, I’ve been experimenting with it to see how close it can come to writing as well as my students. The interface is as easy as ordering coffee: You type a request in a text box and your text is generated in a few seconds. When I typed “write a paragraph about what college students lose if they let AI write their papers,” I had my paragraph in less than 10 seconds, for less than 10 cents:
“If college students let AI write their papers, they may miss out on important writing skills. AI cannot yet replicate the human mind, so it may not be able to capture the nuances and subtleties of writing. In addition, AI papers may not be able to accurately represent the student’s voice or style. As a result, students may end up with a paper that is less than ideal.”
Indeed, this answer is neither nuanced nor subtle, but it won’t be long before our students are able to create convincing papers in a few minutes with minimal effort — ones that do appear to be the product of critical thinking.
Schools will no doubt add turning in machine-generated papers to a list of what counts as academic misconduct, along with plagiarism and buying papers on the Internet. But I’m more interested in how we’ll convince students that just because you can outsource your writing to a machine doesn’t mean you should.
Let’s start by making a distinction between the spelling and grammar checkers that tinker with a writer’s product and GPT-3, which is poised to take over the writing process. As writing instructors tell their students, the writing process matters because writing is a form of thinking; we figure out what we think when we try to write about it. If a machine is doing the writing, then we are not doing the thinking.
In writing courses like mine, students draft essays, share them with instructors and peers, and learn how to respond to counterarguments. More often than not, when I read a student paper draft, I’ll find the most interesting and important point in the conclusion; the student had to write the rest of those paragraphs to figure out that point. When we write, we are working toward a deeper understanding of a text, a phenomenon, a problem, or a philosophical question. Our students won’t experience that by typing an assignment prompt into GPT-3. And we won’t benefit from the thinking that emerges as students try out and modify their ideas.
It only took me a few minutes of experimenting with GPT-3 before I was able to generate introductory paragraphs that mimic those my students might draft, on their own, today. When I asked GPT-3 to conjure up a thesis statement that contained an objection to an argument in Michael Sandel’s book “The Case Against Perfection,” it gave me this:
“One potential benefit of genetic engineering is that it could create a more unified and diverse community. By allowing individuals to choose their own physical and mental traits, genetic engineering could lead to a world in which people are not judged by their appearance or abilities. This would create a more tolerant and inclusive society.”
On the surface, this seems like a reasonable claim. When I assign my students an article by Sandel, we always have lively class discussions about whether society would be more equitable if everyone could choose their own traits. But if one of my students drafted this paragraph, I would have questions: I would ask the student why they think that no one would judge people for the traits they had selected. And I’d ask the student why they concluded that people, when given the choice, would opt for a diverse range of traits rather than choosing to look like TV or movie stars. The student might concede that being able to choose our traits wouldn’t necessarily lead to a less judgmental society. Or the student might argue that since there is so much societal pressure to look a certain way, it would be more equitable if everyone could look that way. No matter how the student answered those questions, they would have developed a clearer and more nuanced position on the topic.
I tell my students that writing — in the classroom, in your journal, in a memo at work — is a way of bringing order to our thinking or of breaking apart that order as we challenge our ideas. We look at the evidence around us. We consider ideas we disagree with. And we try to bring a shape to it all. Sometimes my students see the process differently. They see writing a paper as a hoop they are being asked to jump through, a way for me to evaluate them and pronounce them successful or not. In other words, they see writing solely as a product. If the end point rather than the process were indeed all that mattered, then there might be good reason to turn to GPT-3. But if, as I believe is the case, we write to make sense of the world, then the risks of turning that process over to AI are much greater.
There are many ominous science fiction stories about what might happen if we are defeated by our own machines. But the evidence suggests that rather than being conquered by machines run amok, we’re willingly outsourcing too many processes to them, including writing. And since GPT-3 isn’t actually thinking, no one will be thinking. Perhaps the most worrying outcome is that we will lose our commitment to the idea that we ought to believe what we say and write — an idea that is already under threat from disinformation campaigns and the speed at which social media moves.
Each semester, I tell my students about the magazine editor who, upon learning that I had not checked a fact in an article I was working on, said to me, “If you’re going to put your name on something, don’t you want to know that it’s true?” I tell them there’s no point in writing a paper unless writing it helps you understand why you think what you think.
Would it matter if we stopped believing what we write? I asked GPT-3.
It gave me this answer: “No it does not matter if we believe what we write.”
We’ve reached the point where we can’t easily distinguish machine writing from human writing, but we shouldn’t lose sight of the huge difference between them.
Outtakes: I have asked GPT-3 several times in the past few days to tell me why my argument is wrong. Here’s one of our exchanges (my prompt in regular text followed by GPT-3’s response in green).
While I was working on the article, I got caught up in the challenge of trying to get GPT-3 to complete different writing tasks in amusing ways. I asked it to write country songs, poems, student papers, letters, tweets, and even apologies. It’s definitely fun! But it’s fun in a “let’s talk to Siri or Alexa” kind of way, not in a “this machine has insights” kind of way. A few examples:
Recommended reading: If you’re interested in what others have to say about machine writing, I highly recommend this essay, in which author Vauhini Vara uses AI to help her write about the death of her sister. For a deeper dive into writing and AI, I recommend this article by Steven Johnson.
Note: Substack recently added a new chat feature, which lets newsletter authors start chat threads to connect with their readers. It’s currently available only in the Substack app. If you’re interested, you can download it here. I tried it out, and it seems easy to use. I’m going to start a thread there today to talk about the issues I raised in my article, so come by if you want to try out the app! Or leave a comment here: Has AI writing come to your office yet? What do you think about using AI to write?
Hi Jane, AI is a help and an aid for me, but when it comes down to writing and communication, there is nothing like your own voice and editing. I use Grammarly and ProWritingAid to catch my spelling and grammatical mistakes, but I rely on them only for suggestions, which I consider before accepting. I am a bit dyslexic, so they might be more valuable to me than to others. Both programs are superb, and I recommend both, as one will pick up errors the other misses. I have both programs as extensions in my browser and in Word, so they are at work whenever I'm working. They caught several errors in this reply, but my final read-through still found something that needed changing. So there is no replacement for your own final self-editing.
I fully agree with your article. I spent some time yesterday reviewing the newest additions to the AI writing programs, just out of interest. Their only value, and a costly one for the buyer, might be to an ad agency that loves to bombard potential consumers with marketing information as part of its funnel marketing strategy. For me, with some dyslexic difficulties, AI tools such as Grammarly and ProWritingAid have helped with grammar and spelling. These programs have also educated me over time by showing me the repeated mistakes I make.