
Can language models predict the next twist in a story?

While distant reading has taught us a lot about the history of fiction, it hasn’t done much yet to explain why we keep turning pages.

“Suspense” is the word we use to explain that impulse. But what is suspense? Does it require actual anxiety, or just uncertainty about what happens next? If suspense depends on not knowing what will happen, how can we enjoy re-reading familiar books? (Smuts 2009) And why do we enjoy being surprised? (Tobin 2018)

Beyond these big theoretical puzzles, there are historical questions scholars might like to ask about the way authors use chapter breaks to structure narrative revelation (Dames 2023, 219-38).

Right now, distant reading can’t fully answer any of these questions. When we want to measure surprise or novelty, for instance, we typically measure change in the verbal texture of a story from beginning to end. I made a coarse attempt of that kind in a blog post a few years ago. Other articles use better methods, and give us new ways to think about form (McGrath et al. 2018, Piper et al. 2023). But how closely does the pace of verbal change correlate with readers’ experience of uncertainty or surprise? We don’t know.

Autoregressive language models offer a tempting new angle on this problem, because they’re trained specifically to predict how a given text will continue. Intuitively, it feels like we could measure the predictability of a plot by first asking a model to continue the story, and then measuring the divergence between predicted continuation and real text. Even if this isn’t exactly how readers form expectations and experience surprise, it might begin to give us some leverage on the question.

Researchers have run a loosely similar experiment on very short stories contributed by experimental subjects (Sap et al. 2020). But scaling that up to published novels poses a challenge. For one thing, language models may not be equally good at imitating every style. A contemporary model’s failure to predict the next sentence by Jane Austen might just mean that it’s bad at channeling the Regency.

So, to factor style out of the question, let’s ask a model to predict what will happen in, say, the next three pages of a story — and then compare those predictions to its own summaries of the pages when it sees them.

Readers of a certain age will recognize this as a game Ernie invites Bert to play on “Sesame Street.”

Ernie asks Bert “what happens next” in this picture. Bert anticipates that the man will step in the pail, and disaster will ensue.

To spell the method out more precisely: we move through a novel roughly 900 words at a time. On each pass, we give a language model both a recap of earlier events, and a new 900-word passage. We ask the model to summarize the new passage, and also ask it to predict what will happen next. Then we compare its prediction to the summary it generates when it actually sees the next passage, and measure cosine distance between the two sentence embeddings. A large distance means the model did a poor job of predicting “what would happen next.”
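For readers who want to see the mechanics, here is a minimal sketch of that loop in Python. The helper names (get_summary_and_prediction, embed) are placeholders for whatever LLM and sentence-embedding calls you prefer; the actual code in the repo linked at the end of this post is more elaborate.

import numpy as np

def cosine_distance(a, b):
    """1 minus cosine similarity between two embedding vectors."""
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def plot_surprise(passages, get_summary_and_prediction, embed):
    """passages: a novel split into ~900-word chunks, in order.
    get_summary_and_prediction(recap, passage) -> (summary, prediction) from an LLM.
    embed(text) -> a 1-D numpy vector.
    Returns one distance per chunk boundary: how far the last pass's prediction
    fell from the model's own summary of what actually happened next."""
    distances = []
    recap = ""
    prev_prediction = None
    for passage in passages:
        summary, prediction = get_summary_and_prediction(recap, passage)
        if prev_prediction is not None:
            distances.append(cosine_distance(embed(prev_prediction), embed(summary)))
        recap = recap + " " + summary   # running recap of earlier events
        prev_prediction = prediction
    return distances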

Does this have any relation to human uncertainty?

I’m not claiming that this is a good model of the way readers experience plot. We don’t have a good model of that yet! The more appropriate question to ask is: Does this correlate at all with anything human readers do?

We can check by asking a reader to do the same thing: read roughly 900-word passages and make predictions about the future. Then we can compare the human reader’s predictions to automatically-generated summaries.

Passages were drawn from Now in November, by Josephine Johnson, and Murder is Dangerous, by Saul Levinson. n = 51 passages, Pearson’s r = .41, p < .01. Human predictions are more variable in quality than the model’s.

When I did this for two novels that were complete blanks to me, my predictions tended to diverge from the actual course of the story in roughly the same places where the model found prediction difficult. So there does seem to be some relationship between a language model’s (in)ability to see what’s coming and a human reader’s.

The image above also reveals that there are broad, consistent differences between books. For both people and models, some stories are easier to predict than others.

A reason not to trust this

Readers of this story may already anticipate the next twist—which is that of course we shouldn’t use LLMs to study uncertainty, because these models have already read many of the books we’re studying and will (presumably) already know the plot.

This is a particularly nasty problem because we don’t have a list of the books commercial models were trained on. We’re flying blind. But before we give up, let’s test how much of a problem this really poses. Researchers at Berkeley have defined a convenient test of the extent to which a model has memorized a book (Chang et al. 2023). In essence, they ask the model to fill in missing names.
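To make the idea concrete, here is a toy version of a name-cloze query. The prompt wording is paraphrased rather than copied from Chang et al., an invented passage stands in for real book text, and the snippet assumes the current openai Python client with an API key in the environment.

from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set

# An invented passage; in the real test, sentences come from the book itself,
# with a single character name replaced by [MASK].
passage = "'It was [MASK] who sent the letter,' said the inspector, 'though he will never admit it.'"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Fill in the masked name with a single proper name and nothing else.\n\n" + passage,
    }],
)
print(response.choices[0].message.content)
# High accuracy across many masked passages from one book suggests the model has memorized it.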

Running this test, Chang et al. find that GPT-4 remembers many books in detail. Moreover, its ability to fill in masked names correlates with its accuracy on certain other tasks—like its ability to estimate date of publication. This could be a problem for questions about plot.

To avoid this problem (and also save money), I’ve been using GPT-3.5, which Chang et al. find is less prone to memorize books. But is that enough to address the problem? Let’s check. Below I’ve plotted the average divergence between prediction and summary for 25 novels on the y axis, and GPT-3.5’s ability to supply masked names in those texts on the x axis. If memorization were making prediction more accurate, we would expect to see a negative correlation: predictions’ divergence from summaries should go down as name_cloze accuracy goes up.

The y axis is average cosine distance between prediction and summary; x axis is GPT-3.5’s accuracy on the name cloze test defined in Chang et al.
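The check itself boils down to a correlation between two per-book numbers. A minimal sketch, with illustrative variable names rather than the column names actually used in the repo:

import numpy as np
from scipy.stats import pearsonr

def memorization_check(name_cloze_accuracy, mean_divergence):
    """Each argument holds one value per book: the model's name-cloze accuracy
    and the book's average prediction-summary distance. If memorization were
    helping prediction, the correlation should come out clearly negative."""
    return pearsonr(np.asarray(name_cloze_accuracy), np.asarray(mean_divergence))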

25 books is not enough for a conclusive answer, but so far I cannot measure any pattern of that kind. (If anything, there is a faint trend in the opposite direction.)

In an ideal world, researchers would use language models trained on open data sets that they know and control. But until we get to an ideal world, it looks like it may be possible to run proof-of-concept experiments with things like GPT-3.5, at least if we avoid extremely famous books.

Scrutinizing the image above, readers will probably notice that the most predictable book in this sample was Zoya, by Danielle Steel. Although Steel has a reputation that may encourage disparaging inferences — see Dan Sinykin, Big Fiction, for why — I don’t think we’re in a position to draw those inferences yet. The local rhythms that make prediction possible across three pages are not necessarily what critics mean when they use “predictable” to diss a book.

So what could we learn from predicting the next three pages?

To consider one possible payoff: it might give us a handle on the way chapter-breaks, and other divides, structure the epistemic rhythms of fiction. For instance, many readers have noticed that the installments of novels originally published in magazines tend to end with an explicit mystery to ensure that you keep reading (Haugtvedt 2016 and Beekman 2017). In the first installment of Arthur Conan Doyle’s Hound of the Baskervilles (which covers two chapters), Watson and Holmes learn about a legendary curse that connects the family of the Baskervilles to a fiendish hound. In the final lines of the first installment, Holmes asks the family doctor about footprints found near the body of Sir Charles Baskerville. “A man’s or a woman’s?” Holmes asks. The doctor’s “voice sank almost to a whisper as he answered. ‘Mr Holmes, they were the footprints of a gigantic hound!'” End installment.

Novels don’t have soundtracks. But unexplained, suggestive new information is as good as a sting: “Bum – bum – BUM!”

Illustration for The Hound of the Baskervilles by Sidney Paget, 1902.

It appears that we can measure this cliffhanger effect: the serial installments of The Hound of the Baskervilles often end with a moment of heightened mystery — at least, if inability to predict the next three pages is any measure of mystery. When we measure predictive accuracy throughout the story, making four different passes to ensure we have roughly-900-word chunks aligned with all the chapter breaks, we find that predictions are farther from reality at the ends of serial installments. There is no similar effect at other chapter breaks.

The mean for breaks at serial installments is more than one standard deviation above the mean for other chapter-breaks. In spite of tiny n, this is actually p < .05.
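The comparison behind that caption can be run in a few lines. The sketch below uses a permutation test, which is one reasonable way to get a p-value from such a tiny n; I’m not claiming it’s the only defensible choice.

import numpy as np

def installment_effect(distances, ends_installment, n_perm=10000, seed=0):
    """distances: prediction-summary distance at each chapter break.
    ends_installment: True where the break also ends a serial installment.
    Returns the observed difference in means and a one-sided permutation p-value."""
    rng = np.random.default_rng(seed)
    distances = np.asarray(distances, dtype=float)
    ends_installment = np.asarray(ends_installment, dtype=bool)
    observed = distances[ends_installment].mean() - distances[~ends_installment].mean()
    hits = 0
    for _ in range(n_perm):
        shuffled = rng.permutation(ends_installment)
        diff = distances[shuffled].mean() - distances[~shuffled].mean()
        if diff >= observed:
            hits += 1
    return observed, hits / n_perm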

Now, this is admittedly a cherry-picked example. So far, I have only looked at seven novels where we can distinguish the ends of serial installments from other kinds of chapter break (using data from Warhol et al.). And I don’t see this pattern in all of them.

So I’m not yet making any historical argument about serialization and the rise of the cliffhanger. I’m just suggesting that it’s the kind of question someone could eventually address using this method. A doctoral student could do it, for instance, with a locally hosted model. (I don’t recommend doing it with GPT-3.5, because I dropped $150 or so on this post, and that might add up across a dissertation.) Some initial tests suggest to me that this approach will produce results significantly different from those we’re getting with lexical methods.

Since I’m explicitly encouraging people to run with this, let me also say that someone actually writing a paper using this method might want to tinker with several things before trusting it:

  1. Measuring the distance between the embedding of one prediction sentence and one summary sentence is a crude way to measure expectation and surprise. Readers don’t necessarily form a single expectation about plot. Maybe it would be better to model expectation as a range of possibilities? (One way to do that is sketched just after this list.)
  2. Related to this: models may need to be nudged to speculate and not just predict that current actions will continue.
  3. 900-word chunks may not be the only appropriate scale of analysis. When readers talk about narrative surprise they’re often thinking about larger arcs like “who will he marry?” or “who turns out to be the murderer?”
  4. We need a way to handle braided narratives where each chapter is devoted to a different group of characters (Garrett 1980). In a multi-plot story, the B or C plot will often not continue across a chapter break.
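On point 1, one simple way to model expectation as a range: sample several continuations for the same passage (at a temperature above zero) and score surprise against the closest one, so a moment only counts as surprising if none of the imagined futures came near the truth. A sketch, reusing the kind of embedding helper assumed earlier:

import numpy as np

def cosine_distance(a, b):
    return 1 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def surprise_against_range(predictions, summary, embed):
    """predictions: several sampled predictions for the same passage.
    summary: the model's summary of what actually happened next.
    embed: sentence-embedding function returning a numpy vector."""
    summary_vec = embed(summary)
    return min(cosine_distance(embed(p), summary_vec) for p in predictions)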

But we’re in a multi-plot narrative ourselves, so those problems may be solved by a different group of characters. This was just a blog post to share an idea and get people arguing about it. Tune in next time, for our thrilling conclusion. (Bum – bum – BUM!)

Code and data used for this post are available on Github.

The ideas discussed here were previously presented in Paris at a workshop on AI for the analysis of literary corpora, and in Copenhagen at a conference on generative methods in the social sciences and humanities. I’d like to thank the organizers of those events, especially Thierry Poibeau, Anders Munk, and Rolf Lund, for stimulating conversation — and also many people in attendance, especially David Bamman, Lynn Cherny, and Meredith Martin. In writing code to query the OpenAI API, I borrowed snippets from Quinn Dombrowski (and also of course from GPT-4 itself oh brave new world &c). My thinking about 19c serialization was advanced by conversation with David Bishop and Eleanor Courtemanche, and by suggestions from Ryan Cordell and Elizabeth Foxwell on Bluesky.

References

Beekman, G. (2017) “Emotional Density, Suspense, and the Serialization of The Woman in White in All the Year Round.” Victorian Periodicals Review 50.1.

Chang, K., Cramer, M., Soni, S., & Bamman, D. (2023) “Speak, Memory: An Archaeology of Books known to ChatGPT / GPT-4,” https://arxiv.org/abs/2305.00118.

Dames, N. (2023) The Chapter: A Segmented History from Antiquity to the Twenty-First Century. Princeton University Press.

Garrett, P. (1980) The Victorian Multiplot Novel: Studies in Dialogical Form. Yale University Press.

Haugtvedt, E. (2016) “The Sympathy of Suspense: Gaskell and Braddon’s Slow and Fast Sensation Fiction in Family Magazines.” Victorian Periodicals Review 49.1.

McGrath, L., Higgins, D., & Hintze, A. (2018) “Measuring Modernist Novelty.” Journal of Cultural Analytics. https://culturalanalytics.org/article/11030-measuring-modernist-novelty

Piper, A., Xu, H., & Kolaczyk, E. D. (2023) “Modeling Narrative Revelation.” Computational Humanities Research 2023. https://ceur-ws.org/Vol-3558/paper6166.pdf

Sap, M., Horvitz, E., Choi, Y., Smith, N. A., & Pennebaker, J. (2020) “Recollection Versus Imagination: Exploring Human Memory and Cognition via Neural Language Models.” Proceedings of the 58th Annual Meeting of the ACL. https://aclanthology.org/2020.acl-main.178/

Sinykin, D. (2023) Big Fiction: How Conglomeration Changed the Publishing Industry and American Literature. Columbia University Press.

Smuts, A. (2009) “The Paradox of Suspense.” Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/paradox-suspense/#SusSur

Tobin, V. (2018) The Elements of Surprise: Our Mental Limits and the Satisfactions of Plot. Harvard University Press.

Warhol, R., et al. “Reading Like a Victorian.” https://readinglikeavictorian.osu.edu/.


Liberally-educated students need to be more than consumers of AI

The initial wave of controversy over large language models in education is dying down. We haven’t reached consensus about what to do yet. (Retreat to in-class exams? Make tougher writing assignments? Just forbid the use of AI?) But it’s clear to everyone now that the models will require some response.

In a month or so there will be a backlash to this debate, as professors discover that today’s models are still not capable of writing a coherent, footnoted, twelve-page research paper on their own. We may tell ourselves that the threat to education was overhyped, and congratulate ourselves on having addressed it.

That will be a mistake. We haven’t even begun to discuss the challenge AI poses for education.

For professors, yes, the initial disruption will be easy to address: we can find strategies that allow us to continue evaluating student work while teaching the courses we’re accustomed to teach. Problem solved. But the challenge that really matters here is a challenge for students, who will graduate into a world where white-collar work is being redefined. Some things will get easier: we may all have assistants to help us handle email. But by the same token, students will be asked to tackle bigger challenges.

Our vision of those challenges is confined right now by a discourse that treats models as paper-writing machines. But that’s hardly the limit of their capacity. For instance, models can read. So a lawyer in 2033 may be asked to “use a model to do a quick scan of new case law in these thirty jurisdictions and report back tomorrow on the implications for our project.” But then, come to think of it, a report is bulky and static. So you know what, “don’t write a report. What I really need is a model that’s prepared to provide an overview and then answer questions on this topic as they emerge in meetings over the next week.”

A decade from now, in short, we will probably be using AI not just to gather material and analyze it, but to communicate interactively with customers and colleagues. All the forms of critical thinking we currently teach will still have value in that world. It will still be necessary to ask questions about social context, about hidden assumptions, and about the uncertainty surrounding any estimate. But our students won’t be prepared to address those questions unless they also know enough about machine learning to reason about a model’s assumptions and uncertainty. At higher levels of responsibility, this will require more than being a clever prompter and savvy consumer. White-collar professionals are likely to be fine-tuning their own models; they will need to choose a base model, assess training strategies, and decide whether their models are over-confident or over-cautious.

Midjourney: “a college student looking at helpful utility robots in a department store window, HD photography, 80 mm lens --ar 17:9”

The core problem here is that we can’t fully outsource thinking itself. I can trust Toyota to build me a good car, although I don’t know how fuel injection works. Maybe I read Consumer Reports and buy what they recommend? But if I’m buying a thinking process, I really need to understand what I see when I look under the hood. Otherwise “critical thinking” loses all meaning.

Academic conversation so far has not seemed to recognize this challenge. We have focused on preserving existing assignments, when we should be talking about the new courses and new assignments students will need to think critically in the 2030s.

Because AI has been framed as a collection of opaque gadgets, I know this advice will frustrate many readers. “How can we be expected to prepare students for the 2030s? It’s impossible to know the technical details of the tools that will be available. Besides, no one understands how models work. They’re black boxes. The best preparation we can provide is a general attitude of caveat emptor.”

This is an understandable response, because things moved too quickly in the past four years. We were still struggling to absorb the basic principles of statistical machine learning when statistical ML was displaced by a new generation of tools that seemed even more mysterious. Journalists more or less gave up on explaining things.

But there are principles that undergird machine learning. Statistical learning is really, at bottom, a theory of learning: it tries to describe mathematically what it means to generalize from examples. The concept of a “bias-variance tradeoff,” for instance, allows us to reason more precisely about the intuitive insight that there is some optimal level of abstraction for our models of the world.

Illustration borrowed from https://upscfever.com/upsc-fever/en/data/deeplearning2/2.html.

Deep learning admittedly makes things more complex than the illustration above implies. (In fact, understanding the nature of the generalization performed by LLMs is still an exciting challenge — see the first few minutes of this recent talk by Ilya Sutskever for an example of reflection on the topic.) But if students are going to be fine-tuning their own models, they will definitely need at least a foundation in concepts like “variance” and “overfitting.” A basic course on statistical learning should be part of the core curriculum.
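For readers who want something concrete, the classic toy illustration of that tradeoff fits in a dozen lines of Python; nothing here is specific to language models.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.shape)   # noisy observations
x_new = np.linspace(0.05, 0.95, 200)
y_true = np.sin(2 * np.pi * x_new)                             # what generalization should recover

for degree in (1, 3, 15):
    coeffs = np.polyfit(x, y, degree)
    train_error = np.mean((np.polyval(coeffs, x) - y) ** 2)
    test_error = np.mean((np.polyval(coeffs, x_new) - y_true) ** 2)
    print(f"degree {degree}: train {train_error:.3f}, test {test_error:.3f}")

# Degree 1 underfits (high bias); degree 15 typically chases the noise (high
# variance); something in between generalizes best -- an "optimal level of abstraction."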

I might go a little further, and suggest that professors in every department are going to want to reflect on principles and applications of machine learning, so we can give students the background they need to keep thinking critically about our domain of expertise in a world where some (not all) aspects of reading and analysis may be automated.

Is this a lot of work? Yes. Are we already overburdened, and should we have more support? Yes and yes. Professors have already spent much of their lives mastering one field of expertise; asking them to pick up the basics of another field on the fly while doing their original jobs is a lot. So adapting to AI will happen slowly and it will be imperfect and we should all cut each other slack.

But to look on the bright side: none of this is boring. It’s not just a technical hassle to fend off. There are fundamental intellectual challenges here, and if we make it through these rapids in one piece, we’re likely to see some new things.


We can save what matters about writing—at a price

It’s beginning to sink in that generative AI is going to force professors to change their writing assignments this fall. Corey Robin’s recent blog post is a model of candor on the topic. A few months ago, he expected it would be hard for students to answer his assignments using AI. (At least, it would require so much work that students would effectively have to learn everything he wanted to teach.) Then he asked his 15-year-old daughter to red-team his assignments. “[M]y daughter started refining her inputs, putting in more parameters and prompts. The essays got better, more specific, more pointed.”

Perhaps not every 15-year-old would get the same result. But still. Robin is planning to go with in-class exams “until a better option comes along.” It’s a good short-term solution.

In this post, I’d like to reflect on the “better options” we may need over the long term, if we want students to do more thinking than can fit into one exam period.

If you want an immediate pragmatic fix, there is good advice out there already about adjusting writing assignments. Institutions have not been asleep at the wheel. My own university has posted a practical guide, and the Modern Language Association and Conference on College Composition and Communication have (to their credit) quickly drafted a working paper on the topic that avoids panic and makes a number of wise suggestions. A recurring theme in many of these documents is “the value of process-focused instruction” (“Working Paper,” 10).

Why focus on process? A cynical way to think about it is that documenting the writing process makes it harder for students to cheat. There are lots of polished 5-page essays out there to imitate, but fewer templates that trace the evolution of an idea from an initial insight, through second thoughts, to dialectical final draft.

Making it harder to cheat is not a bad idea. But the MLA-CCCC task force doesn’t dwell on this cynical angle. Instead they suggest that we should foreground “process knowledge” and “metacognition” because those things were always the point of writing instruction. This is much the same thesis Corey Robin explores at the end of his post when he compares writing to psychotherapy: “Only on the couch have I been led to externalize myself, to throw my thoughts and feelings onto a screen and to look at them, to see them as something other, coldly and from a distance, the way I do when I write.”

Midjourney: “a hand writing with a quill reflected in a mirror, by MC Escher, in the style of meta-representation --ar 3:2 --weird 50”

Robin’s spin on this insight is elegiac: in losing take-home essays, we might lose an opportunity to teach self-critique. The task force spins it more optimistically, suggesting that we can find ways to preserve metacognition and even ways to use LLMs (large language models) to help students think about the writing process.

I prefer their optimistic spin. But of course, one can imagine an even-more-elegiac riposte to the task force report. “Won’t AI eventually find ways to simulate critical metacognition itself, writing the (fake) process reflection along with the final essay?”

Yes, that could happen. So this is where we reach the slightly edgier spin I feel we need to put on “teach the process” — which is that, over the long run, we can only save what matters about writing if we’re willing to learn something ourselves. It isn’t a good long-term strategy for us to approach these questions with the attitude that we (professors) have a fixed repository of wisdom — and the only thing AI should ever force us to discuss is, how to convey that wisdom effectively to students. If we take that approach, then yes, the game is over as soon as a model learns what we know. It will become possible to “cheat” by simulating learning.

But if the goal of education is actually to learn new things — and we’re learning those things along with our students — then simulating the process is not something to fear. Consider assignments that take the form of an experiment, for instance. Experiments can be faked. But you don’t get very far doing so, because fake experiments don’t replicate. If a simulated experiment does reliably replicate in the real world, we don’t call that “cheating” — but “in-silico research that taught us something new.”

If humanists and social scientists can find cognitive processes analogous to experiment — processes where a well-documented simulation of learning is the same thing as learning — we will be in the enviable position Robin originally thought he occupied: students who can simulate the process of doing an assignment will effectively have completed the assignment.

I don’t think most take-home essays actually occupy that safe position yet, because in reality our assignments often ask students to reinvent a wheel, or rehearse a debate that has already been worked through by some earlier generation. A number of valid (if perhaps conflicting) answers to our question are already on record. The verb “rehearse” may sound dismissive, but I don’t mean this dismissively. It can have real value to walk in the shoes of past generations. Sometimes ontogeny does need to recapitulate phylogeny, and we should keep asking students to do that, occasionally — even if they have to do it with pencil on paper.

But we will also need to devise new kinds of questions for advanced students—questions that are hard to answer even with AI assistance, because no one knows what the answer is yet. One approach is to ask students to gather and interpret fresh evidence by doing ethnography, interviewing people, digging into archival boxes, organizing corpora for text analysis, etc. These are assignments of a more demanding kind than we have typically handed undergrads, but that’s the point. Some things are actually easier now, and colleges may have to stretch students further in order to challenge them.

“Gathering fresh evidence” puts the emphasis on empirical data, and effectively preserves the take-home essay by turning it into an experiment. What about other parts of humanistic education: interpretive reflection, theory, critique, normative debate? I think all of those matter too. I can’t say yet how we’ll preserve them. It’s not the sort of problem one person could solve. But I am willing to venture that the meta-answer is, we’ll preserve these aspects of education by learning from the challenge and adapting these assignments so they can’t be fulfilled merely by rehearsing received ideas. Maybe, for instance, language models can help writers reflect explicitly on the wheels they’re reinventing, and recognize that their normative argument requires another twist before it will genuinely break new ground. If so, that’s not just a patch for writing assignments — but an advance for our whole intellectual project.

I understand that this is an annoying thesis. If you strip away the gentle framing, I’m saying that we professors will have to change the way we think in order to respond to generative AI. That’s a presumptuous thing to say about disciplines that have been around for hundreds of years, pursuing aims that remained relatively constant while new technologies came and went.

However, that annoying thesis is what I believe. Machine learning is not just another technology, and patching pedagogy is not going to be a sufficient response. (As Marc Watkins has recently noted, patching pedagogy with surveillance is a cure worse than the disease.) This time we can only save what matters about our disciplines if we’re willing to learn something in the process. The best I can do to make that claim less irritating is to add that I think we’re up for the challenge. I don’t feel like a voice crying in the wilderness on this. I see a lot of recent signs — from the admirable work of the MLA and CCCC to books like The Ends of Knowledge (eds. Scarborough and Rudy) — that professors are thinking creatively about a wide range of recent challenges, and are capable of responding in ways that are at once critical and self-critical. Learning is our job. We’ve got this.

References

Center for Innovation in Teaching and Learning, UIUC. “Artificial Intelligence Implications in Teaching and Learning.” Champaign, IL, 2023.

MLA-CCCC Joint Task Force on Writing and AI, “MLA-CCCC Joint Task Force on Writing and AI Working Paper: Overview of the Issues, Statement of Principles, and Recommendations,” July 2023.

Robin, Corey. “How ChatGPT Changed My Plans for the Fall,” July 30, 2023.

Rudy, Seth, and Rachel Scarborough King, The Ends of Knowledge: Outcomes and Endpoints across the Arts and Sciences. London: Bloomsbury, 2023.

Watkins, Marc. “Will 2024 look like 1984?” July 31, 2023.


Using GPT-4 to measure the passage of time in fiction

Language models have been compared to parrots, but the bigger danger is that they turn people into parrots. A student who asks for “a paper about Middlemarch,” for instance, will get a pastiche loosely based on many things in the model’s training set. This may not count as plagiarism, but it won’t produce anything new.

But there are ways to use language models actively and creatively. We can select evidence to be analyzed, put it in a prompt, and specify the questions to be asked. Used this way, language models can create new knowledge that didn’t exist when they were trained. There are many ways to do this, and people may eventually get quite creative. But let’s start with a familiar task, so we can evaluate the results and decide whether language models really help. An obvious place to start is “content analysis”—a research method that analyzes hundreds or thousands of documents by posing questions about specific themes.

Below I work through a simple example of content analysis using the OpenAI API to measure the passage of time in fiction (see this GitHub repo for code). To spoil the suspense: I find that for this task, language models add something valuable, not just because they’re more accurate than older ways of doing this at scale but because they explain themselves better.

Why measure time in fiction?

Researchers already have several good ways to automate content analysis. Named entity extraction addresses certain kinds of questions. Topic modeling addresses others.

But there are also tricky questions that remain hard to answer by counting words. In 2017, for instance, I started to wonder how much time passes, on average, across a page of a novel. Literary-critical tradition suggested that there had been a pretty stable balance between “scene” (minute-by-minute description) and “summary” (which may cover weeks or years) until modernists started to leave out the summary and make every page breathlessly immediate [1]. But when I sat down with two graduate students (Sabrina Lee and Jessica Mercado) to manually characterize a thousand passages from fiction, we found instead a long trend. The average length of time represented in 250 words of fiction had been getting steadily shorter since the early eighteenth century. There was a trend toward immediacy, in other words, but modernism didn’t begin the trend [2].


The average length of time described in 250 words of narration. Y axis is logarithmic. The shaded ribbon represents a 95% confidence interval for the dashed curve, which is itself calculated by loess regression. From Underwood, “Why Literary Time is Measured in Minutes,” p. 352.

How well do bag-of-words methods estimate time?

We decided to characterize passages manually because references to time in fiction can’t always be taken literally. If a character thinks (or says) “wow, it’s been thirty years but feels like yesterday,” you don’t want to conclude that thirty years have passed on the page. So word-counting seemed risky. Even as human readers we often found it hard to decide how much time was passing. But when a passage was read by two different people, our estimates agreed with each other well enough to conclude that “fictive time” was a meaningful construct, if not a precise one (r = .74 on log-transformed durations).

A year after we published our paper, Greg Yauney showed that word-counting methods can do an acceptable job of estimating time [3]. He trained a bag-of-words model on the passages we had labeled, and applied it to a new set of passages labeled by new readers. The model-predicted durations correlated with human estimates at r = .35. While this is much lower than inter-human agreement, the model was stable enough to precisely measure a trend (across thousands of books) that matched the trend we had sketched using laborious human reading of a hundred books.

From Yauney, Underwood, and Mimno, “Computational Prediction of Elapsed Narrative Time.”

Replicating this on our old data, I get a slightly higher correlation between models and readers (r = .49 on log-transformed durations), perhaps because the data I’m using was produced by readers who compared notes for several weeks to maximize agreement. But since this is still lower than .74, bag-of-words models are definitely less good at estimating time than human readers.

Can we train LLMs to estimate time?

To get a large language model to answer a question, you first need to make sure it understands the question. As Simon Willison helpfully explains, the way to train a chat model through the API is to demonstrate the interaction you want by providing a series of imagined exchanges between a “user” and an “assistant” [4]. These aren’t real replies from the assistant but ego ideals you’re providing to teach it how to behave.

Here’s the instruction I had the “user” give:

Read the following passage of fiction. Then do five things.
 
1: Briefly summarize the passage.
2: Reason step by step to decide how much time is described in the passage. If the passage doesn't include any explicit reference to time, you can guess how much time the events described would have taken. Even description can imply the passage of time by describing the earlier history of people or buildings. But characters' references to the past or future in spoken dialogue should not count as time that passed in the scene. Report the time using units of years, weeks, days, hours, or minutes. Do not say zero or N/A.
3: If you described a range of possible times in step 2 take the midpoint of the range. Then multiply to convert the units into minutes.
4: Report only the number of minutes elapsed, which should match the number in step 3. Do not reply N/A.
5: Given the amount of speculation required in step 2, describe your certainty about the estimate--either high, moderate, or low.
The passage follows: <omitting this to save space>

You’ll notice that I’m using chain-of-thought prompting, building slowly from summary to inference to an estimated number of minutes. You may also notice I tell the model twice “don’t reply N/A.” I found the model was pretty cautious about speculating and needed to be encouraged to make assumptions and guesses. But to be honest, it’s possible that the model was right to be cautious! Duration is hard to estimate in some cases, and maybe we humans should have allowed N/A responses when we did this.

Here’s an example of an ego-ideal reply I provided to teach the “assistant” how to respond. This was responding to a passage from The Big Sleep.

1: A detective wakes up 'the next morning,' looks out a window for an undefined time, drinks (and presumably needs to make) two cups of coffee, then shaves and showers and gets dressed before stepping out his front door and seeing a car.
2: Making coffee, showering, and getting dressed take at least an hour. There's some ambiguity about whether to count the implicit reference to yesterday (since this is 'the next morning') as time elapsed in the passage, but let's say no, since yesterday is not actually described. So, an hour to 90 minutes.
3: 1.25 hours have elapsed. Multiplying by 60 minutes an hour that's 75 minutes.
4: 75 minutes.
5: Low confidence, because of ambiguity about a reference to the previous day.

Using some code generously shared by Quinn Dombrowski, I gave the model four query-reply sequences like this, then asked it to characterize a new passage with the same instructions.
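Put together, the conversation sent to the API looks roughly like this. This is a simplified sketch, not the repo’s actual code (which adapts Quinn Dombrowski’s snippets); it assumes the current openai Python client and that INSTRUCTIONS holds the five-step prompt quoted above.

from openai import OpenAI

client = OpenAI()   # assumes OPENAI_API_KEY is set

INSTRUCTIONS = "Read the following passage of fiction. Then do five things. ..."  # full text above
# demonstrations: list of (passage, ideal_reply) pairs like the Big Sleep example above

def build_messages(demonstrations, new_passage):
    messages = []
    for passage, ideal_reply in demonstrations:
        messages.append({"role": "user", "content": INSTRUCTIONS + "\n" + passage})
        messages.append({"role": "assistant", "content": ideal_reply})
    messages.append({"role": "user", "content": INSTRUCTIONS + "\n" + new_passage})
    return messages

def estimate_passage(demonstrations, new_passage, model="gpt-4"):
    response = client.chat.completions.create(
        model=model,
        messages=build_messages(demonstrations, new_passage),
        temperature=0,
    )
    return response.choices[0].message.content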

To assess performance on this simple task, I just extracted the minutes reported in step 4 and compared them to our human estimates from 2017. Other forms of content analysis might require the model to edit or mark up the text provided in the prompt. That’s doable, but I thought I would start with something simple.
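Extracting the number is then a matter of pattern-matching on the reply; one plausible version (the repo may parse it differently):

import re

def extract_minutes(reply_text):
    """Pull the number of minutes out of the 'step 4' line of a reply."""
    match = re.search(r"4:\s*([\d,\.]+)\s*minutes", reply_text)
    return float(match.group(1).replace(",", "")) if match else None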

I added step 5 (allowing the model to describe its own confidence) because in early experiments I found the model’s tendency to editorialize extremely valuable. The answers to step 5 were also fun to read as replies scrolled up the page, because my new sorcerer’s assistant complained volubly about the ambiguity of its task. (“30 minutes. Low confidence, as the passage is more focused on the poetic and symbolic aspects of the scene rather than providing a clear sense of time.”). In some cases it refused the task altogether and threw the question about confidence back in my face. (“N/A. High confidence that no specific amount of time is described.”) This capacity for backtalk is a feature not a bug: I learned a lot from it.

But after I adjusted my prompt to address the ambiguities the assistant correctly pointed out, complete refusal to answer the question was rare.

How well does GPT-4 estimate time?

I had the Turbo model code 483 passages. Its predictions correlated with human estimates at r = .59. I only asked GPT-4 to code 121 passages (because it’s more expensive to run than Turbo), and it achieved r = .68. This is not as good as inter-human agreement (.74), but it’s closer to human readers than to bag-of-words models (.35–.49). And of course GPT-4 does the work more quickly than human readers. It took the three of us several months to generate this data, but my LLM experiment was run in a couple of days. Plus, given an API, large language models are easier to use than other forms of machine learning: the main challenge is to describe your question systematically. This is a method that could realistically be used by researchers with relatively little programming experience.

The total cost to my OpenAI account was $22. Of that amount, about $8 was testing the prompts and running on the (cheaper) Turbo API. The final run on GPT-4 cost $14; if I had completely replicated our work from 2017 on GPT-4 it would have cost $140. So a project like this will not bankrupt researchers. But reliance on OpenAI is still a bad idea in the long run. Universities should develop their own models and APIs, both for reasons of institutional autonomy/survival (!!😬!!) and so that researchers can inspect and tune the models.

However, one could exaggerate the problems of interpretability created by a closed model. Watching the model think out loud about each passage actually gave me a better sense of how results were produced than I could extract from my own records of human research in 2017. (See below for an example from GPT-4: human readers don’t usually document their thought process this verbosely.) Also, because human beings can’t erase their own memories, it’s not possible for us to “repeat the same question with slightly different language.” If I try that on myself, I’m going to remember what I thought before and probably think it again. But a model can wake up each moment with eternal sunshine of the spotless mind.

ABRIDGED PASSAGE FROM Jane Eyre: And signs, for aught we know, may be but the sympathies of Nature with man. When I was a little girl, only six years old, I one night heard Bessie Leaven say to Martha Abbot that she had been dreaming about a little child ... Of late I had often recalled this saying and this incident; for during the past week scarcely a night had gone over my couch that had not brought with it a dream of an infant, which I sometimes hushed in my arms, sometimes dandled on my knee, sometimes watched playing with daisies on a lawn, or again, dabbling its hands in running water. It was a wailing child this night, and a laughing one the next: now it nestled close to me, and now it ran from me; but whatever mood the apparition evinced, whatever aspect it wore, it failed not for seven successive nights to meet me the moment I entered the land of slumber.

ASSISTANT REPLIES. 1: The narrator recalls a saying about dreaming of children being a sign of trouble and remembers an incident from her childhood. She then describes her recent dreams of infants over the past week. 
2: The passage mentions the narrator's childhood memory, which happened when she was six years old, and the past week of dreams. The time elapsed in the passage is the week of dreams. 
3: 1 week has elapsed. At seven days a week that's 7 days. 24 hours in a day, so 168 hours. 60 minutes in an hour, so 10080 minutes. 
4: 10080 minutes. 
5: Moderate confidence, as the childhood memory is not counted as time passing in the scene.

What no model can do

While I think large language models will be tremendously useful assistants in content analysis, I don’t think we can dispense with multiple human readers. As David Bamman explains in a good Twitter thread, the core challenges of this work are “a. coming up with a new construct to measure, b. demonstrating that it can be measured, and c. showing that there’s value in doing so.” Intersubjective agreement between human beings is still the only way to know that we have addressed those questions.

That’s why I wouldn’t feel comfortable just making up my own definition of, say, “suspense,” teaching a model to measure it, and then running the model across a thousand books. The problem with that approach is not that LLMs have measurement error. (As we’ve seen in Greg Yauney’s paper, measurement methods with much more error can be productive.) The problem is that it isn’t clear what a single researcher’s construct means in the first place. To be confident that we’re measuring something called “suspense” we need to show that multiple people recognize it as suspense. And in spite of all the rhetorical personification in this blog post (which I trust you have understood rhetorically), a model is not a separate person. So a project of this kind still needs to start by getting several human beings to read systematically and compare notes, before the human codebook is translated into a prompt.

On the other hand, I do think language models will allow us to pose questions that are currently hard to pose at scale. Questions about plot and character, for instance, require reading a whole book while applying delicate decision criteria that may be hard to remember and keep stable across many books. Language models will help here not just because they’re fast, but because they can provide a relatively stable yardstick by which to measure slippery concepts, even at a modest scale of analysis.

This is a preliminary report on a very strange world. For me, the most surprising take-away from this experiment was not that deep learning is more accurate than statistical NLP, but that it may also be in some ways more interpretable. Because a language model has to think out loud, it tends to automatically document its own reasoning. This is useful even when (or especially when) the model doesn’t think you’ve defined the construct it’s supposed to measure clearly enough.

References

[1] Gérard Genette, Narrative Discourse: An Essay in Method, trans. Jane E. Lewin (Ithaca: Cornell Univ. Press, 1980), 97.

[2] Ted Underwood, “Why Literary Time is Measured in Minutes,” New Literary History 85.2 (2018) 351-65.

[3] Greg Yauney, Ted Underwood, and David Mimno, “Computational Prediction of Elapsed Narrative Time” (2019).

[4] Simon Willison, “A Simple Python Wrapper for the ChatGPT API” (2023).