
We can save what matters about writing—at a price

Writing is a way of learning. And we can save what matters about it if we’re willing to learn something ourselves.

It’s beginning to sink in that generative AI is going to force professors to change their writing assignments this fall. Corey Robin’s recent blog post is a model of candor on the topic. A few months ago, he expected it would be hard for students to answer his assignments using AI. (At least, it would require so much work that students would effectively have to learn everything he wanted to teach.) Then he asked his 15-year-old daughter to red-team his assignments. “[M]y daughter started refining her inputs, putting in more parameters and prompts. The essays got better, more specific, more pointed.”

Perhaps not every 15-year-old would get the same result. But still. Robin is planning to go with in-class exams “until a better option comes along.” It’s a good short-term solution.

In this post, I’d like to reflect on the “better options” we may need over the long term, if we want students to do more thinking than can fit into one exam period.

If you want an immediate pragmatic fix, there is good advice out there already about adjusting writing assignments. Institutions have not been asleep at the wheel. My own university has posted a practical guide, and the Modern Language Association and Conference on College Composition and Communication have (to their credit) quickly drafted a working paper on the topic that avoids panic and makes a number of wise suggestions. A recurring theme in many of these documents is “the value of process-focused instruction” (“Working Paper,” 10).

Why focus on process? A cynical way to think about it is that documenting the writing process makes it harder for students to cheat. There are lots of polished 5-page essays out there to imitate, but fewer templates that trace the evolution of an idea from an initial insight, through second thoughts, to a dialectical final draft.

Making it harder to cheat is not a bad idea. But the MLA-CCCC task force doesn’t dwell on this cynical angle. Instead they suggest that we should foreground “process knowledge” and “metacognition” because those things were always the point of writing instruction. This is much the same thesis Corey Robin explores at the end of his post when he compares writing to psychotherapy: “Only on the couch have I been led to externalize myself, to throw my thoughts and feelings onto a screen and to look at them, to see them as something other, coldly and from a distance, the way I do when I write.”

Midjourney: “a hand writing with a quill reflected in a mirror, by MC Escher, in the style of meta-representation --ar 3:2 --weird 50”

Robin’s spin on this insight is elegiac: in losing take-home essays, we might lose an opportunity to teach self-critique. The task force spins it more optimistically, suggesting that we can find ways to preserve metacognition and even ways to use LLMs (large language models) to help students think about the writing process.

I prefer their optimistic spin. But of course, one can imagine an even-more-elegiac riposte to the task force report. “Won’t AI eventually find ways to simulate critical metacognition itself, writing the (fake) process reflection along with the final essay?”

Yes, that could happen. So this is where we reach the slightly edgier spin I feel we need to put on “teach the process” — which is that, over the long run, we can only save what matters about writing if we’re willing to learn something ourselves. It isn’t a good long-term strategy for us to approach these questions with the attitude that we (professors) have a fixed repository of wisdom, and that the only thing AI should ever force us to discuss is how to convey that wisdom effectively to students. If we take that approach, then yes, the game is over as soon as a model learns what we know. It will become possible to “cheat” by simulating learning.

But if the goal of education is actually to learn new things — and we’re learning those things along with our students — then simulating the process is not something to fear. Consider assignments that take the form of an experiment, for instance. Experiments can be faked. But you don’t get very far doing so, because fake experiments don’t replicate. If a simulated experiment does reliably replicate in the real world, we don’t call that “cheating” — but “in-silico research that taught us something new.”

If humanists and social scientists can find cognitive processes analogous to experiment — processes where a well-documented simulation of learning is the same thing as learning — we will be in the enviable position Robin originally thought he occupied: students who can simulate the process of doing an assignment will effectively have completed the assignment.

I don’t think most take-home essays actually occupy that safe position yet, because in reality our assignments often ask students to reinvent a wheel, or rehearse a debate that has already been worked through by some earlier generation. A number of valid (if perhaps conflicting) answers to our question are already on record. The verb “rehearse” may sound dismissive, but I don’t mean this dismissively. It can have real value to walk in the shoes of past generations. Sometimes ontogeny does need to recapitulate phylogeny, and we should keep asking students to do that, occasionally — even if they have to do it with pencil on paper.

But we will also need to devise new kinds of questions for advanced students—questions that are hard to answer even with AI assistance, because no one knows what the answer is yet. One approach is to ask students to gather and interpret fresh evidence by doing ethnography, interviewing people, digging into archival boxes, organizing corpora for text analysis, etc. These are assignments of a more demanding kind than we have typically handed undergrads, but that’s the point. Some things are actually easier now, and colleges may have to stretch students further in order to challenge them.

“Gathering fresh evidence” puts the emphasis on empirical data and effectively preserves the take-home essay by turning it into an experiment. What about other parts of humanistic education: interpretive reflection, theory, critique, normative debate? I think all of those matter too. I can’t say yet how we’ll preserve them; it’s not the sort of problem one person could solve. But I am willing to venture that the meta-answer is that we’ll preserve these aspects of education by learning from the challenge and adapting our assignments so they can’t be fulfilled merely by rehearsing received ideas. Maybe, for instance, language models can help writers reflect explicitly on the wheels they’re reinventing, and recognize that their normative argument requires another twist before it will genuinely break new ground. If so, that’s not just a patch for writing assignments but an advance for our whole intellectual project.
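
To make that suggestion concrete, here is a minimal sketch of what such a reflection aid might look like, assuming access to an LLM chat API (the `openai` Python package is used below; the model name, the prompt wording, and the `reflect_on_draft` helper are illustrative assumptions, not a tested assignment design):

```python
# Hypothetical sketch: ask an LLM to flag the "reinvented wheels" in a draft,
# so the writer can see which claims restate existing positions and where a
# genuinely new twist is still needed. Prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def reflect_on_draft(draft: str, model: str = "gpt-4o") -> str:
    """Return a process-reflection critique of a student draft."""
    prompt = (
        "Read the student draft below. First, list the claims that largely "
        "restate well-known positions, naming the traditions they echo. "
        "Then identify where the argument would need a further twist, or "
        "new evidence, to genuinely break new ground.\n\n"
        f"DRAFT:\n{draft}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("draft.txt") as f:
        print(reflect_on_draft(f.read()))
```

The point of a tool like this would be metacognitive rather than generative: the list of echoed traditions is a mirror the writer holds up during revision, not a substitute for the essay.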

I understand that this is an annoying thesis. If you strip away the gentle framing, I’m saying that we professors will have to change the way we think in order to respond to generative AI. That’s a presumptuous thing to say about disciplines that have been around for hundreds of years, pursuing aims that remained relatively constant while new technologies came and went.

However, that annoying thesis is what I believe. Machine learning is not just another technology, and patching pedagogy is not going to be a sufficient response. (As Marc Watkins has recently noted, patching pedagogy with surveillance is a cure worse than the disease.) This time we can only save what matters about our disciplines if we’re willing to learn something in the process. The best I can do to make that claim less irritating is to add that I think we’re up for the challenge. I don’t feel like a voice crying in the wilderness on this. I see a lot of recent signs — from the admirable work of the MLA and CCCC to books like The Ends of Knowledge (eds. King and Rudy) — that professors are thinking creatively about a wide range of recent challenges, and are capable of responding in ways that are at once critical and self-critical. Learning is our job. We’ve got this.

References

Center for Innovation in Teaching and Learning, UIUC. “Artificial Intelligence Implications in Teaching and Learning.” Champaign, IL, 2023.

King, Rachel Scarborough, and Seth Rudy, eds. The Ends of Knowledge: Outcomes and Endpoints across the Arts and Sciences. London: Bloomsbury, 2023.

MLA-CCCC Joint Task Force on Writing and AI. “MLA-CCCC Joint Task Force on Writing and AI Working Paper: Overview of the Issues, Statement of Principles, and Recommendations.” July 2023.

Robin, Corey. “How ChatGPT Changed My Plans for the Fall.” July 30, 2023.

Watkins, Marc. “Will 2024 look like 1984?” July 31, 2023.


Ted Underwood is Professor of Information Sciences and English at the University of Illinois, Urbana-Champaign. On Twitter he is @Ted_Underwood.

8 replies on “We can save what matters about writing—at a price”

Commenting on your own blog is kind of 2012. But I think writing this led me to a place of clarity that isn’t fully reflected in the blog post itself (in part because I know the take-home point is irritating and am phrasing it delicately).

Less delicately, AI is forcing us to be honest about

1) which assignments are asking students to rehearse existing knowledge and
2) which assignments really demand that they get beyond it.

We used to pretend a lot of take-home writing was category 2 when it was really 1. We got away with that before the internet, because students tended not to have good, quick access to a large archive of existing knowledge. Also, in the humanities, the fact that there wasn’t a single right answer to our assignments made it seem as though each essay was a unique snowflake. But in reality, these assignments were asking students to reinvent wheels. They just had a lot of flexibility about the style of wheel they ended up with.

We can’t get away with pretending any longer. Category 1 stuff is going to need to be pencil-and-paper. That probably means more in-person exams than we used to give.

But there’s room for us to explore new assignments that really *do* demand that students get beyond existing knowledge. We can create assignments like that in a variety of ways. Experimental processes and new empirical data are one relatively easy way to get beyond existing knowledge, but there may be others.

This involves hard work, but I don’t think it’s a loss. I don’t think we’re patching pedagogy because AI damaged it. I think AI is forcing us to be more honest about our own thought processes, and push them further.

I don’t think the type of open-ended questions you suggest need be limited to ‘advanced’ students – I think people at *any* level can be encouraged and supported to think creatively. So much of education at all levels is devoted to trying to get the ‘learner’ to replicate what is in the head of the ‘educator’, and to do this in some ritualised format that can be mechanically assessed by human or machine. Maybe AI will actually force people to think for themselves again.
