

“…even today, the process of hardening, the crystallization of life has not yet been completed…”
—WE, Yevgeny Zamyatin (1921)
“She is the most perfect and most obedient tool which mankind ever possessed!”
—Metropolis (1927)
“As companies like Microsoft and Meta have learned the hard way, A.I. systems can be erratic and unpredictable, or even turn creepy and dangerous.”
—“How Does ChatGPT Really Work,” Kevin Roose (2023)
What does it mean to be human? What does it mean to “write” or “compose”? The progression of technology has probably always forced us to evolve our answers to these questions. With the proliferation of relatively effective AI writing programs, I think teachers, especially teachers of writing, thinking, and research, are going to have to do a lot of soul searching, starting now.
Today was my last day as a teacher, at least for the foreseeable future. I spent about an hour of it scrolling through students’ documents, thinking a lot about the line between plagiarism and autocorrect, and trying to see who had crossed it.
ChatGPT itself is not an especially good writer, as my students and I discovered this semester. In a technical sense, it can write a decent “essay”— depending on how you define the word essay— and if you’re a good writer, you can probably get it to help you write a good essay. But its writing is often, not surprisingly, robotic and off-putting, especially in the hands of inexperienced writers who don’t yet have a command of voice, attribution, and citation.
It seems like even many AI experts are somewhat confused as to how, exactly, large language models and neural networks actually function. In an open letter released last week in response to the release of ChatGPT-4 (the one many of my students have probably been using to write mediocre but hard-to-recognize AI essays), computer scientists and tech leaders called for a six-month pause on AI development and warned that
recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?
Is this hyperbole? I have no idea, and there doesn’t seem to be consensus on this issue among the people who understand this technology best, perhaps because the technology is so new and is evolving so quickly. It’s hard not to think of the rapid and uncontrolled early spread and mutation of the COVID-19 virus as a possible analogy. In an effort to compete with one another, major tech companies like Microsoft and Google are engaged in a kind of “arms race” to create the newest and best AI tech, and as a result there are probably few, if any, people who truly understand how the tech works and where it might plausibly go.
But what a small yet significant number of my students understood this year was that ChatGPT could, at least in the short term, make the “work” of school easier. In the last few months, I received increasing numbers of writing assignments that were either certainly or plausibly “written” by AI chatbots. Some of these I discovered with free versions of AI detectors like GPTZero, though these seemed to work less well as time went on, perhaps because of improvements in the bots, and perhaps because students also have access to these detectors and are likely using them to check successive drafts as they paraphrase the language initially created by the bots. Some students would likely have gotten away with this kind of paraphrasing if I didn’t require them to turn in early electronic drafts and continue to revise on those documents— a practice that predates AI chatbots, and helps with teaching revision.
I asked ChatGPT to write an introduction for this essay. Initially, I hoped it would write something passable, so that I could wait until this point in the piece to reveal that— twist— AI actually wrote it. But this is the first part of what ChatGPT wrote:
As humanity hurtles forward into an era marked by unprecedented technological advancement, the evolution of Artificial Intelligence (AI) stands at the forefront of our collective imagination. The possibilities that AI technology holds for our future are seemingly boundless, promising to reshape industries, revolutionize healthcare, and enhance our daily lives. Yet, amidst the awe-inspiring potential lies a disquieting undercurrent. A recent anecdote serves as a cautionary tale, where students surreptitiously exploit AI-powered language models to effortlessly produce flawless essays, blurring the lines of academic integrity and raising pressing ethical concerns. In this essay, we delve into the plausible future of AI technology, adopting a concerned yet objective tone, and explore the potential ramifications it may unleash upon our society.
(emphasis mine)
And I know that you are too smart to think that the same person who wrote the above paragraph wrote the rest of this essay.
Here’s another attempt, slightly better, after I gave it an entire draft of my essay and asked it to write an introductory paragraph:
The rapid advancement of Artificial Intelligence (AI) has brought forth a myriad of possibilities that captivate our imagination and hold the potential to reshape our future. However, as I reflect on my last day as a teacher, I am confronted with the unsettling realization that AI, despite its capabilities, falls short in certain aspects. My students and I have experienced firsthand the limitations of AI's writing abilities, often finding its output to be robotic and lacking personality. Moreover, the intricate workings of large language models and neural networks remain enigmatic to many, even experts in the field. Recent concerns voiced by computer scientists and tech leaders further highlight the need for introspection. The escalating race among tech giants to develop ever more powerful AI systems, like ChatGPT-4, has given rise to ethical quandaries and raised questions about the loss of control over our civilization. This essay explores the intersection of AI technology and education, shedding light on the ethical implications, challenges of academic integrity, and the urgency to comprehend the far-reaching consequences that may unfold as AI becomes increasingly embedded in our society.
It’s not a great introduction. It closely paraphrases points I’ll go on to make in the body of the essay, and it doesn’t really match my voice, but it does suggest that experienced writers will get more out of AI than those who are still in the early stages of learning to write authentically. I’ll leave it to you to decide whether it matches the quality of my writing.
This is perhaps the most positive aspect of the current AI technology— it provides an opportunity to discuss voice and tone with students. The paragraph GPT wrote does sound like an introductory paragraph, and it does sort of respond to my prompt— it’s debatable whether it captured the “concerned but objective tone” I asked for (in that it’s not debatable: it sounds like an advertisement for ChatGPT written by Ayn Rand). In fact, instead of attempting that tone, it simply said it was adopting such a tone. It also contains a hilarious description of students using AI to “effortlessly produce flawless essays”. But it also sounds robotic and devoid of personality, probably because it hasn’t been trained on any particular person’s writing, but on an almost unimaginably huge number of “essays,” likely scraped from the Internet. And if you want to know what kinds of writers GPT is learning from, just Google “free essays” or visit a Reddit forum and read some of the writing there.
I can see a teachable moment in asking GPT to write an essay and having students compare it to a human-written essay. That might give them an opportunity to understand what GPT does well— it spells correctly, it generally uses “standard” grammar, and it “understands” how to simulate the thinking and organization that goes into a lot of good nonfiction writing. On the sentence level, it’s not bad, which may be why AI detectors tend to do better the more text you feed into them.
But we didn’t have time for that this year, unfortunately, and the students seem to have discovered AI a few months after it first started making news. So rather than using it on formative writing assignments early on, they were trying to get it to write final papers and even College Board essays.
In one of the more ironic episodes of my teaching career, the first students I observed clearly using AI to write their essays were responding to a selection of prompts asking them to connect the concept of “gaslighting” to Yevgeny Zamyatin’s 1921 dystopian novel WE, which we had just read as part of a unit in which we also read an excerpt from Feed, read the short story “Openness,” and watched clips from the films Metropolis, 2001: A Space Odyssey (the famous “open the pod bay doors, HAL” scene) and Wall-E. All of these texts foreground the dangers of automation, standardization, and roboticization. Several deal explicitly with sentient technology that replaces human beings, or with technology that blurs the line between person and machine. We had some great discussions.

But fifteen-year-olds are gonna fifteen-year-old, and they went ahead and used the robot to do their work. In a way, I understand. The pressure students, parents, and teachers attach to grades can be immense. Although the “ungrading” movement is well-intentioned and probably correct in the short term, it is very difficult to motivate students in high school to practice writing without attaching scores to the feedback. Students are often overwhelmed with work and not good at balancing and prioritizing, so when a tool seems to make their lives easier, I understand why they would be tempted.
I also agree with John Warner’s assessment that the increased student use of AI is a warning sign that we are valuing the wrong kinds of writing in school, that we are so driven by standardized assessment “data” that we often fail to help students understand writing as a meaningful, human way of making meaning, of processing emotions and ideas and conflicts. I wrote about this last year, in response to silly proclamations by legislators— gearing up for the current phase in the “anti-woke” crusades— that students should be learning “bare facts”.
I also partly agree with Warner’s prescription for addressing the problem of student over-reliance on AI, which is essentially that we need to assign more meaningful writing assignments in schools, assignments that can’t be effectively completed by ChatGPT (however “flawlessly” the software may “believe” it is operating) because they require voice, nuance, and complex thinking.
A major obstacle to this is that, as Paul Bowers’ recent analysis of SC class sizes shows, teachers have more students than ever, and as the number of people willing to teach shrinks, there is less and less time (and fewer people with expertise in teaching writing) to teach and assess authentic writing skills.
In my own non-AP classes this year, the average class size was more than 30 students per section. That means that for every essay assigned, I would need to read 90-100 pages per class. More systematic writing is much easier to grade at that volume: look for a thesis statement, count pieces of direct evidence, etc. Truly good writing, of course, doesn’t often follow simple systematic approaches, but ChatGPT, by definition, does.
So part of the solution, as always, is going to involve having more highly skilled teachers. Will that happen with South Carolina and many other states responding to perceived educational deficits with privatization, defunding, and a kind of censorship that not only turns off potential teachers but limits the kind of nuance and complexity necessary to write things a robot can’t? Probably not. We need leaders who want to fix the system instead of burning it down, but the current GOP platform could be simplified to “burn it down,” and Democrats and independents seem woefully slow at coming up with a more compelling alternative. As always, it is up to individual stakeholders in public education to find common cause and solidarity in order to respond.
Thank you for reading this piece! You can support this work by subscribing, upgrading to a paid subscription (which gives you access to a pretty deep archive of essays like this) and sharing with anyone who might be interested. You can also stream and buy my music here.
Note: this was the second attempt. My first prompt, “Write an introduction to an essay about the possible future of AI technology, with a concerned but objective tone. Include a two-sentence anecdote about students using the technology to cheat on essay assignments,” resulted in several paragraphs that didn’t even kind of sound like me. The follow-up prompt was, “Write an introductory paragraph to an essay about the possible future of AI technology, with a concerned but objective tone. Include a two-sentence anecdote about students using the technology to cheat on essay assignments. Do not make reference to the fact that this is an introduction to an essay.” What I think this reveals is that the better you are at writing, the better GPT will “write,” up to a point.
I also tried feeding the first draft of this essay into ChatGPT, with the prompt, “Write an introductory paragraph to the following essay, without using the word ‘essay’”. It instead paraphrased my entire essay in a way that sounded very much like a student poorly plagiarizing an essay from the Internet. So in some ways it is very good at simulation.
College Board tried to address the obvious threat AI chatbots pose to its online writing portfolio system, releasing an updated statement on the use of the bots about a month before portfolio assignments were due and adding some basic (and not very effective) AI detection to its Turnitin plagiarism checker. (And if you are wondering whether there are bots which will take your AI essay and mangle up the phrasing so that it doesn’t get caught by the AI detectors, gold star for you!)