Categories
disciplinary history, interpretive theory

We’re probably due for another discussion of Stanley Fish

In 2017, models of literary language can expand to become models of interpretive communities.

I think I see an interesting theoretical debate over the horizon. The debate is too big to resolve in a blog post, but I thought it might be narratively useful to foreshadow it—sort of as novelists create suspense by dropping hints about the character traits that will develop into conflict by the end of the book.

Basically, the problem is that scholars who use numbers to understand literary history have moved on from Stanley Fish’s critique, without much agreement about why or how. In the early 1970s, Fish gave a talk at the English Institute that defined a crucial problem for linguistic analysis of literature. Later published as “What Is Stylistics, and Why Are They Saying Such Terrible Things About It?”, the essay focused on “the absence of any constraint” governing the move “from description to interpretation.” Fish takes Louis Milic’s discussion of Jonathan Swift’s “habit of piling up words in series” as an example. Having demonstrated that Swift does this, Milic concludes that the habit “argues a fertile and well stocked mind.” But Fish asks how we can make that sort of inference, generally, about any linguistic pattern. How do we know that reliance on series demonstrates a “well stocked mind” rather than, say, “an anal-retentive personality”?

The problem is that isolating linguistic details for analysis also removes them from the context we normally use to give them a literary interpretation. We know what the exclamation “Sad!” implies, when we see it at the end of a Trumpian tweet. But if you tell me abstractly that writer A used “sad” more than writer B, I can’t necessarily tell you what it implies about either writer. If I try to find an answer by squinting at word lists, I’ll often make up something arbitrary. Word lists aren’t self-interpreting.

Thirty years passed; the internet got invented. In the excitement, dusty critiques from the 1970s got buried. But Fish’s argument was never actually killed, and if you listen to the squeaks of bats, you hear rumors that it still walks at night.

Or you could listen to blogs. This post is partly prompted by a blogged excerpt from a forthcoming work by Dennis Tenen, which quotes Fish to warn contemporary digital humanists that “a relation can always be found between any number of low-level, formal features of a text and a given high-level account of its meaning.” Without “explanatory frameworks,” we won’t know which of those relations are meaningful.

Ryan Cordell’s recent reflections on “machine objectivity” could lead us in a similar direction. At least they lead me in that direction, because I think the error Cordell discusses—over-reliance on machines themselves to ground analysis—often comes from a misguided attempt to solve the problem of arbitrariness exposed by Fish. Researchers are attracted to unsupervised methods like topic modeling in part because those methods seem to generate analytic categories that are entirely untainted by arbitrary human choices. But as Fish explained, you can’t escape making choices. (Should I label this topic “sadness” or “Presidential put-downs”?)
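To see where the human choice re-enters, here is a toy sketch (invented documents, and scikit-learn’s LDA standing in for whatever tool a given project actually uses) of the moment when an unsupervised model hands its output back to us:

```python
# A minimal sketch, with invented documents, of the choice Fish's critique points at:
# an unsupervised topic model returns word lists, but the labels are ours to supply.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "sad loser failing badly very sad",        # toy documents standing in
    "join us tomorrow at the rally in ohio",   # for a real corpus
    "crazy dishonest media sad",
    "thank you ohio join us tomorrow",
]

vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

vocab = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top_words = [vocab[j] for j in weights.argsort()[-4:][::-1]]
    # The model supplies only the word list; "sadness" versus "Presidential
    # put-downs" is an interpretive decision the researcher still has to make.
    print(f"topic {i}: {top_words}")
```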

I don’t think any of these dilemmas are unresolvable. Although Fish’s critique identified a real problem, there are lots of valid solutions to it, and today I think most published research is solving the problem reasonably well. But how? Did something happen since the 1970s that made a difference? There are different opinions here, and the issues at stake are complex enough that it could take decades of conversation to work through them. Here I just want to sketch a few directions the conversation could go.

Dennis Tenen’s recent post implies that the underlying problem is that our models of form lack causal, explanatory force. “We must not mistake mere extrapolation for an account of deep causes and effects.” I don’t think he takes this conclusion quite to the point of arguing that predictive models should be avoided, but he definitely wants to recommend that mere prediction should be supplemented by explanatory inference. And to that extent, I agree—although, as I’ll say in a moment, I have a different diagnosis of the underlying problem.

It may also be worth reviewing Fish’s solution to his own dilemma in “What Is Stylistics,” which was that interpretive arguments need to be anchored in specific “interpretive acts” (93). That has always been a good idea. David Robinson’s analysis of Trump tweets identifies certain words (“badly,” “crazy”) as signs that a tweet was written by Trump, and others (“tomorrow,” “join”) as signs that it was written by his staff. But he also quotes whole tweets, so you can see how words are used in context, make your own interpretive judgment, and come to a better understanding of the model. There are many similar gestures in Stanford LitLab pamphlets: distant readers actually rely quite heavily on close reading.
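For readers who want to see the mechanics, here is a simplified Python sketch of that kind of comparison. It is not Robinson’s actual code (he worked in R, with log odds ratios over the real tweet archive); the tweets below are invented and the method is deliberately bare-bones.

```python
# A bare-bones sketch of finding words distinctive of one author: smoothed
# log-odds of each word under two hypothetical sets of tweets.
from collections import Counter
import math

trump_tweets = ["so sad crazy media failing badly", "crazy and sad loser"]  # invented
staff_tweets = ["join us tomorrow in ohio", "tomorrow we rally join us"]    # invented

counts_a = Counter(w for t in trump_tweets for w in t.split())
counts_b = Counter(w for t in staff_tweets for w in t.split())
total_a, total_b = sum(counts_a.values()), sum(counts_b.values())

def log_odds(word):
    # add-one smoothing so unseen words don't produce division by zero
    pa = (counts_a[word] + 1) / (total_a + 1)
    pb = (counts_b[word] + 1) / (total_b + 1)
    return math.log(pa / pb)

vocab = set(counts_a) | set(counts_b)
ranked = sorted(vocab, key=log_odds, reverse=True)
print("most distinctive of the first author:", ranked[:3])
print("most distinctive of the second author:", ranked[-3:])
# The numbers identify the words; deciding what "sad" or "tomorrow" implies
# still requires reading whole tweets in context, which is Fish's point.
```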

My understanding of this problem has been shaped by a slightly later Fish essay, “Interpreting the Variorum” (1976), which returns to the problem broached in “What Is Stylistics,” but resolves it in a more social way. Fish concludes that interpretation is anchored not just in an individual reader’s acts of interpretation, but in “interpretive communities.” Here, I suspect, he is rediscovering an older hermeneutic insight, which is that human acts acquire meaning from the context of human history itself. So the interpretation of culture inevitably has a circular character.

One lesson I draw is simply that we shouldn’t work too hard to avoid making assumptions. Most of the time we do a decent job of connecting meaning to an implicit or explicit interpretive community. Pointing to examples, using word lists derived from a historical thesaurus or sentiment dictionary—all of that can work well enough. The really dubious moves we make often come from trying to escape circularity altogether, in order to achieve what Alan Liu has called “tabula rasa interpretation.”

But we can also make quantitative methods more explicit about their grounding in interpretive communities. Lauren Klein’s discussion of the TOME interface she constructed with Jacob Eisenstein is a good model here; Klein suggests that we can understand topic modeling better by dividing a corpus into subsets of documents (say, articles from different newspapers), to see how a topic varies across human contexts.
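A schematic version of that move, with made-up topic proportions and newspaper labels rather than the actual corpus behind TOME, might look like this:

```python
# A minimal sketch: once each document has topic proportions (from any topic
# model), average them within human-defined subsets to see how a topic varies
# across contexts. Both the proportions and the newspaper names are invented.
import numpy as np

doc_topics = np.array([      # doc_topics[i, k] = proportion of topic k in document i
    [0.7, 0.3],
    [0.2, 0.8],
    [0.6, 0.4],
    [0.1, 0.9],
])
newspapers = ["Paper A", "Paper A", "Paper B", "Paper B"]  # hypothetical sources

for paper in sorted(set(newspapers)):
    mask = np.array([name == paper for name in newspapers])
    print(paper, doc_topics[mask].mean(axis=0).round(2))
```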

Of course, if you pursue that approach systematically enough, it will lead you away from topic modeling toward methods that rely more explicitly on human judgment. I have been leaning on supervised algorithms a lot lately, not because they’re easier to test or more reliable than unsupervised ones, but because they explicitly acknowledge that interpretation has to be anchored in human history.
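To make that concrete, here is a minimal sketch (invented texts and labels, not an actual experiment) of what anchoring a model in a particular group of readers’ judgments looks like in code:

```python
# A toy sketch of supervised modeling anchored in human judgment: the labels
# come from a specific (here invented) set of readers, and the model measures
# how far word usage predicts that group's responses.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts = [
    "the moon rose over silent hills",      # toy stand-ins for volumes of poetry
    "the committee approved the budget",
    "her grief was a quiet sea",
    "quarterly profits exceeded forecasts",
]
reviewed = [1, 0, 1, 0]  # hypothetical judgments by a particular interpretive community

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
print("accuracy across folds:", cross_val_score(model, texts, reviewed, cv=2))
# The model can only measure resemblance to categories those readers defined.
```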

At a first glance, this may seem to make progress impossible. “All we can ever discover is which books resemble these other books selected by a particular group of readers. The algorithm can only reproduce a category someone else already defined!” And yes, supervised modeling is circular. But this is a circularity shared by all interpretation of history, and it never merely reproduces its starting point. You can discover that books resemble each other to different degrees. You can discover that models defined by the responses of one interpretive community do or don’t align with models of another. And often you can, carefully, provisionally, draw explanatory inferences from the model itself, assisted perhaps by a bit of close reading.

I’m not trying to diss unsupervised methods here. Actually, unsupervised methods are based on clear, principled assumptions. And a topic model is already a lot more contextually grounded than “use of series == well stocked mind.” I’m just saying that the hermeneutic circle is a little slipperier in unsupervised learning, easier to misunderstand, and harder to defend to crowds of pitchfork-wielding skeptics.

In short, there are lots of good responses to Fish’s critique. But if that critique is going to be revived by skeptics over the next few years—as I suspect—I think I’ll take my stand for the moment on supervised machine learning, which can explicitly build bridges between details of literary language and social contexts of reception.  There are other ways to describe best practices: we could emphasize a need to seek “explanations,” or avoid claims of “objectivity.” But I think the crucial advance we have made over the 1970s is that we’re no longer just modeling language; we can model interpretive communities at the same time.

Photo credit: A school of yellow-tailed goatfish, photo for NOAA Photo Library, CC-BY Dwayne Meadows, 2004.

Postscript July 15: Jonathan Armoza points out that Stephen Ramsay wrote a post articulating his own, more deformative response to “What is Stylistics” in 2012.


14 replies on “We’re probably due for another discussion of Stanley Fish”

I’ve been thinking about Fish’s arguments for years, though mostly on the matter of description. Fish, as you know, argues that description is every bit as subject to interpretive will as is interpretation (though that’s not quite how he put it). I was hoping somehow to evade that point, but have pretty much decided that it can’t be evaded. However one might distinguish between description and interpretation (or between description and explanation), that’s not how you do it. (FWIW I’ve got some more thoughts along those lines in this blog post, where I bring Michael Bérubé to bear against Fish.)

But, as a contrasting example to think about, consider Watson and Crick 1953, in which they assert: “We wish to put forward a radically different structure for the salt of deoxyribose nucleic acid. This structure has two helical chains each coiled round the same axis (see diagram).” That paper, of course, is one of the most important in 20th century science.

I note that, in the first place, the result is (merely) descriptive in nature: this is the shape of the molecule. Thus it’s pretty much the same kind of statement as: dogs have four limbs, a tail, and a head. That second statement is utterly trivial, and pretty much uncontested, while the first is deep and, as far as I know, uncontested as well. Why the different status of the two statements? The gross anatomy of dogs is something one can identify by visual inspection and counting. “Seeing” the structure of DNA molecules is considerably more difficult. Several competing labs were working at it over some period of years, I believe.

I don’t really know how they did it–I suppose a basic account is available at, say, Wikipedia. Still, let me spin a little tale. First you’ve got to prepare your crystallized DNA salt. Then you’ve got to bombard it with X-rays and capture the dispersion patterns on a photographic plate or film. You then figure out what structure the crystal would have to have in order to produce the observed patterns. That strikes me as involving a fair amount of machinery and inference.

And yet, once the double-helix structure had been proposed, it stuck. As far as I know, no one now contests that structure and quite a bit of science and technology has been built on it. Is it the machinery that made it work? No. It was certainly necessary; without it there wouldn’t have been any observations. And it had to be in good working order. It was also necessary to reason about the observations made with the aid of that machinery. The process of reasoning from the patterns on the photographic plates to the molecular structure was surely mathematical. But it can’t have been slam-dunk obvious, otherwise arriving at the correct structure wouldn’t have been a challenging task.

That mathematics would have been about the behavior of electromagnetic radiation and, I suppose, we could think about that as providing an explanatory framework. The inferred structure of the molecule explains why we get the observed patterns. But that explanatory framework is within the realm of physics, and is only incidental to biology. That framework doesn’t tell us anything about the function and behavior of the DNA molecule; we’re just using it to create a description of its structure.

Now, I’ve been thinking about this example in the context of descriptive analysis of the formal properties of individual texts, not distant reading. In that context I conclude that: 1) description is not at all obvious but that 2) even in difficult circumstances it is possible to reach intersubjective agreement. What’s this have to do with distant reading? It does seem to me that much of the work is descriptive in character. The description may take the form of a statistical distribution, which is examined via images, but those mathematical and visual objects are nonetheless descriptive. Description isn’t purely verbal. Getting to that distribution, of course, is not at all simple.

As for intersubjective agreement, there’s no way to engineer that from some transcendental point of view. You’ve got to run your experiments and present your results to your peers. And then you’ll work it out amongst yourselves. If there’s something there, you’ll reach agreement. Otherwise, you’ll just haggle endlessly.

But you’re never going to convince someone whose prior beliefs tell them that it’s all subjective and therefore your work is pointless.

I like your emphasis on intersubjective agreement. And in the end, I agree, that seems to be an empirical problem.

That’s why I’m perfectly happy with the 1976 version of Fish that says “everything rests on interpretive communities.” I find that in *practice*, smart hominid communities working in good faith and paying attention to evidence can usually get their interpretive acts together and find enough common ground to move forward.

To be sure, it’s not *guaranteed* to work! I’m having real difficulty finding common ground with Fox News Corp at the moment. But still, it’s a pragmatic question; as you say, “there’s no way to engineer that from a transcendental point of view.”

I’d like to say a bit more about intersubjective agreement. It’s my impression that most discussions of epistemology and of philosophy of science assume a more or less impersonal and unified epistemic agent. Intersubjectivity is just a side-effect of ‘getting it right’. I don’t think that’s adequate. I think that communication between (quasi-)independent epistemic agents (what a term!) is intrinsic.

Now, just how one would make good argument on that point is not obvious to me. Let me just spitball something to give a flavor of what I’m thinking: On the one hand (A) we have an epistemic agent with N units of computational power. On the other hand (B), we have 10 independent epistemic agents each with N/10 units of computational power. Thus in both cases we’ve got the same number of computational units (whatever they are).

My argument is that, over the long haul, case B will be more effective in knowing the world. Why? Because we’ve got 10 interacting points of view and they’ve got to come to agreement among them. In case A we’ve only got one agent, one point of view. In any given situation our agent may be undecided over several alternatives. But reaching a decision in that situation is (somehow) different from ten independent agents reaching agreement in the same situation.

Of course the world is large, complicated, and various. So are the possibilities of thought. Intersubjective agreement is not always possible.

That’s almost a testable hypothesis. You’ve probably encountered some research on the advantage of “ensemble methods,” right? That’s not a full model of what you mean by a “subject” or “agent,” but it hints that when we do provide a full model, it’s clearly going to be true that there are advantages to plurality.
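For what it’s worth, the standard illustration of that advantage is easy to reproduce. Here’s a toy scikit-learn sketch (synthetic data, and not a model of “agents” in your sense) comparing one deep decision tree with a bagged ensemble of trees:

```python
# A loose illustration of the ensemble intuition: several partly independent
# learners, averaged together, often beat a single learner on the same data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

single = DecisionTreeClassifier(random_state=0)
ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10, random_state=0)

print("single tree:", cross_val_score(single, X, y, cv=5).mean().round(3))
print("ensemble of 10:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```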

Of course, it’s more complicated than that. Isn’t it always?

I want to go back to description, and its difference from interpretation and explanation, three of the four categories Sharon Marcus uses in “Erich Auerbach’s Mimesis and the Value of Scale” (Modern Language Quarterly 77.3, 2016, 297-319); evaluation is the fourth. As you know, she uses those categories to analyze the rhetorical structure of Mimesis. In the Milic example Fish in effect objects to Milic interpreting a statistical description of Swift’s prose as evidence that he has “a fertile and well stocked mind.” Point taken. In a different essay in Is There a Text? he objects to Stephen Booth smuggling in his interpretations of Shakespeare sonnets by asserting that he’s merely describing them, as though description were self-evident. Again, point taken.

Now let’s look at Heuser and Le-Khac, A Quantitative Literary History of 2,958 Nineteenth-Century British Novels: The Semantic Cohort Method (2012). Much of the pamphlet is devoted to explaining how they were able to make a series of observations of their corpus indicating a shift from abstract to concrete vocabulary. That’s a descriptive statement. For the purposes of this note I’m going to treat that process as a black box and take the description at face value.

What interests me is how they get from that descriptive statement to a (possible) explanation for it. Roughly speaking: 1) we have a shift of population from rural to urban, leading to 2) different patterns of social relations as a function of how much time one spends with intimates vs. strangers, and then 3) how this is expressed in fictional prose, as a shift from abstract to concrete language. That last step is the trickiest. But the whole process of getting from description to explanation is, in this case, a tricky one.
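Just to make the descriptive step tangible, here is a toy Python sketch with invented seed word lists and two stand-in decades; it is nothing like their actual semantic-cohort method, but it shows what a claim of “abstract to concrete” is made of at the level of counting:

```python
# A toy illustration of the descriptive claim: track the share of an "abstract"
# versus a "concrete" word list across slices of a corpus. Word lists and texts
# are invented; the real pamphlet derives its cohorts from the corpus itself.
from collections import Counter

ABSTRACT = {"virtue", "sentiment", "modesty", "duty"}   # hypothetical seed list
CONCRETE = {"hand", "door", "street", "face"}           # hypothetical seed list

corpus_by_decade = {                                    # stand-in for thousands of novels
    1810: "her virtue and modesty were a duty and a sentiment",
    1870: "he put his hand on the door and looked down the street at her face",
}

for decade, text in sorted(corpus_by_decade.items()):
    counts = Counter(text.split())
    abstract = sum(counts[w] for w in ABSTRACT)
    concrete = sum(counts[w] for w in CONCRETE)
    print(decade, "abstract share:", round(abstract / (abstract + concrete), 2))
```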

I note further that this is the piece that Alan Liu put at the center of his 2013 PMLA article, The Meaning of the Digital Humanities. As you know, Liu had been critiquing DH for a failure to get at meaning. It seems to me that he was, in effect, complaining that DH was delivering description, but no more.

And yet even description can dig deep, no? It seems to me that the work you’ve done with Jordan Sellers on prestige in poetry is largely descriptive, but description with a bite. For one thing, you fail to find the major stylistic shifts argued for in standard literary history. Moreover, there seems to be a century-long stylistic “drift” in poetic space, indicating some kind of direction to the literary process. Explaining that, as you indicate, will be tricky.

And so it goes.

Yes. I think the boundary between “description” and “interpretation” is very, very blurry. Literary critics talk about that boundary in a way that made good sense for a 5-page essay on _The Turn of the Screw._ Plot summary — clearly, that’s just description. Are the ghosts figures for sexual repression? Okay, that’s interpretation.

That boundary works for one novel. I mean, you can problematize the boundary — Fish certainly would — but in practice, we’re still able to divide description from interpretation. But when you back up to several hundred or several thousand novels, it’s no longer clear at all what would count as “mere description.” Almost no observation about a thousand novels is direct or simple. I tend to think the description/interpretation boundary ceases to be very meaningful — or at least, it no longer means what we’re accustomed to mean by it. Instead, we should just talk about “models,” a word which appropriately posits that everything is more-or-less descriptive and more-or-less interpretive.

The predictive / explanatory boundary Tenen is emphasizing makes a bit more sense. It’s still blurry, like all boundaries, and in practice those two functions will be complementary. But at least it’s rooted in a real difference between two kinds of models.

So, in my original DNA example, Watson and Crick certainly had (some kind of) a model of what happens when X-rays pass through DNA salts. They used that model to infer the structure of the DNA. In Heuser and Le-Khac they had one kind of model that yielded a set of observations about vocabulary in their corpus. And they invoked another kind of model to explain that.

One problem I have with “interpretation” is that the word can be used to cover just about anything. Michael Bérubé makes this point in “There is Nothing Inside the Text, or, Why No One’s Heard of Wolfgang Iser”, Postmodern Sophistry: Stanley Fish and the Critical Enterprise, Gary A. Olson and Lynn Worsham, eds. (SUNY, 2004), pp. 11-26:

It would have been possible, in other words, to contest Fish’s reading of Iser not by stubbornly insisting on the determinacy of the determinate, and not, good Lord, by insisting on two separate varieties of determinacy and assigning “interpretation” to one of them, but by acknowledging that all forms of reading are interpretive but that some involve the kind of low-level, relatively uncontestable cognitive acts we engage in whenever we interpret the letter “e” as the letter “e,” and some involve the kind of high level, exceptionally specific and complex textual manipulations, transformations and reconfigurations involved whenever someone publishes something like S/Z – or Surprised by Sin. (And, of course, that there are any number of “interpretations” that fall between these extremes, and that the status of each of them is – what else? – both open to and dependent on interpretation.)

So what’s up with Fish’s interpretive communities? I’ve not read any of his stuff on that other than what’s in that volume, and that was some time ago. I vaguely remember something about an (absurd) Eskimo (or did he say Inuit) interpretation of “A Rose for Emily” and came away with the impression that it’s a rather idealized notion. It’s not as though, for example, there’s a psychoanalytic interpretation of Hamlet to which all psychoanalytic critics subscribe. You might object that psychoanalytic criticism is highly varied and not some one school at all. Which is sort of my point. Just what, in the real world, are these interpretive communities?

And then I wonder, is he talking only of the world of professional literary critics, who, along with their students, are pretty much the only ones who offer up these interpretations? I’d think we’re interested in what the literary audience makes of texts, even if they don’t make full-blown interpretations of them. I’ve been told that other scholars have taken the notion of interpretive communities in this direction and made something of it.

And then we have an interesting book by Derek Attridge and Henry Staten, The Craft of Poetry: Dialogues on Minimal Interpretation (2015). For various reasons they decide to engage in “minimal interpretation” of a number of texts using a method of “dialogic poetics.” Minimal interpretation means that they decide to place all versions of critical theory off limits. They limit their vocabulary to nonspecialized educated discourse. As a result, their interpretations read a lot like New Criticism, except that they explicitly reject the assumptions of organic unity and textual autonomy. As for dialogic, that’s what they do, engage one another in dialog. They chose a dozen or so poems and engaged one another in, I assume, email conversation, which they then edited for the book, preserving the back-and-forth of dialog rather than writing it all in a single collective voice.

Their goal was to reach as much agreement as possible. Ideally, that would be complete agreement as, after all, they are each working off one and the same text. In fact they do manage to reach considerable agreement, which is not surprising. Nor is it surprising that they fail to reach complete agreement. They’re upfront about this and don’t make a big deal of it. All in all, it’s an interesting piece of work.

But I’m not sure what conclusions to draw from it. They talk about dialogic poetics as a method they propose, but I don’t see the profession deciding to pair up so that a considerable number of critics, if not all of us, engage in dialogic readings. Nor, I should add, do they suggest that this dialogic poetics should supplant the various theory-informed readings. Rather, their idea is that each text has some one minimal interpretation and then various theory-inflected interpretations.

I long ago decided that interpreting texts is not a well-formed process and so is not, under any reasonable circumstances, going to yield single all-encompassing interpretations. That’s just not possible. When you then take into account the different personalities, interests, and theoretical commitments of critics, multiple interpretations is what you get. But interpretive communities?

And yet, community is important. And I think that literary texts are important vehicles for creating that community. And that requires that they are a vehicle for sharing more than ink splotches on pages. But how we get at that, well, I’ve got lots to say on that point, but not here. This has gone on too long as it is.

Hi there! Thanks for your insight. I’m looking to learn more about topic models and word embedding “from the start” (per se) to get a basis for a research position I might begin soon assisting a judicial linguistic analysis project (focusing on how the language of court decisions has evolved over time), but I have been running into a lot of literature which enters in the middle of ongoing discussions in the NLP field (such as your post). Any recommendations on a good book or article (textbook even?) that might be able to explain NLP theory and applications to an uninformed reader? Thanks.
