Five years ago it was easy to check on new digital subfields of the humanities. Just open Twitter. If a new blog post had dropped, or a magazine had published a fresh denunciation of “digital humanities,” academics would be buzzing.
In 2017, Stanley Fish and Leon Wieseltier are no longer attacking “DH” — and if they did, people might not care. Twitter, unfortunately, has bigger problems to worry about, because the Anglo-American political world has seen some changes for the worse.
But the world of digital humanities, I think, has seen changes for the better. It seems increasingly taken for granted that digital media and computational methods can play a role in the humanities. Perhaps a small role — and a controversial one — and one without much curricular support. But still!
In place of journalistic controversies and flame wars, we are finally getting a broad scholarly conversation about new ideas. Conversations of this kind take time to develop. Many of us will recall Twitter threads from 2013 anxiously wondering whether digital scholarship would ever have an impact on more “mainstream” disciplinary venues. The answer “it just takes time” wasn’t, in 2013, very convincing.
But in fact, it just took time. Quantitative methods and macroscopic evidence, for instance, are now a central subject of debate in literary studies. (Since flame wars may not be entirely over, I should acknowledge that I’m now moving to talk about one small subfield of DH rather than trying to do justice to the whole thing.)
The immediate occasion for this post is a special issue of Genre (v. 50, n. 1) engaging the theme of “data” in relation to the Victorian novel; this follows a special issue of Modern Language Quarterly on “scale and value.” Next year, “Scale” is the theme of the English Institute, and little birds tell me that PMLA is also organizing an issue on related themes. Meanwhile, of course, the new journal Cultural Analytics is providing an open-access home for essays that make computational methods central to their interpretive practice.
The participants in this conversation don’t all identify as digital humanists or distant readers. But they are generally open-minded scholars willing to engage ideas as ideas, whatever their disciplinary origin. Some are still deeply suspicious of numbers, but they are willing to consider both sides of that question. Many recent essays are refreshingly aware that quantitative analysis is itself a mode of interpretation, guided by explicit reflection on interpretive theory. Instead of reifying computation as a “tool” or “skill,” for instance, Robert Mitchell engages the intellectual history of Bayesian statistics in Genre.
Recent essays also seem aware that the history of large-scale quantitative approaches to the literary past didn’t begin and end with Franco Moretti. References to book history and the Annales School mix with citations of Tanya Clement and Andrew Piper. Although I admire Moretti’s work, this expansion of the conversation is welcome and overdue.
If “data” were a theme — like thing theory or the Anthropocene — this play might now have reached its happy ending. Getting literary scholars to talk about a theme is normally enough.
In fact, the play could proceed for several more acts, because “data” is shorthand for a range of interpretive practices that aren’t yet naturalized in the humanities. At most universities, grad students still can’t learn how to do distant reading. So there is no chance at all that distant reading will become the “next big thing” — one of those fashions that sweeps departments of English, changing everyone’s writing in a way that is soon taken for granted. We can stop worrying about that. Adding citations to Geertz and Foucault can be done in a month. But a method that requires years of retraining will never become the next big thing. Maybe, ten years from now, the fraction of humanities faculty who actually use quantitative methods will have risen to 5% — or optimistically, 7%. But even that change would be slow and deeply controversial.
So we might as well enjoy the current situation. The initial wave of utopian promises and enraged jeremiads about “DH” seems to have receded. Scholars have realized that new objects, and methods, of study are here to stay — and that they are in no danger of taking over. Now it’s just a matter of doing the work. That, also, takes time.
7 replies on “Digital humanities as a semi-normal thing”
FWIW, it seems to me that cognitive and Darwinian criticism are harder to work up than adding more citations to Foucault, Geertz, or for that matter, Deleuze or, more recently, Latour, but not so difficult as ‘distant reading’ (the scare quotes are my feeble protest against the idea that this is some form of reading). While people working in cognitive and Darwinian criticism certainly read the technical literature, what actually gets taken over into their criticism (whether practical or theoretical) is at the level of high-quality general-audience exposition (the first half of Brian Boyd’s The Origin of Stories is an excellent example). And the practical criticism is like standard criticism but with a different vocabulary. They don’t actually have to learn a different way of reasoning about literary phenomena.
Distant reading, of course, is quite different. It’s not simply that there’s the need to learn the computational tools. The more profound difference is learning to think about literary phenomena in a way that is amenable to using those tools, as Andrew Goldstone has recently pointed out. That’s the deep difference. Without that, learning to use the tools is pointless.
I quite agree. A lot of good work can be done “from a distance” that isn’t very digital. Most of Moretti’s essays from the period 2000-05 don’t rely heavily on computers at all, and one could point to yet earlier work by people like Janice Radway. The difficulty, as you say, is more about learning to frame different questions.
Which makes the larger rubric of *digital* humanities sort of awkwardly askew to distant reading — but that’s ground we’ve all been over before.
There’s also what you need to know in order to appreciate distant reading, and even reference it and build on it, without actually doing it yourself. You don’t need to be able to use the software, obviously, but beyond that things vary from case to case. It seems to me, for example, that Moretti’s work on networks is pretty transparent. But topic modelling is not. What kind of background understanding do you need to have to feel comfortable with topic modelling?
Right. Fortunately, there are usually ways to translate the complex stuff. For instance, you might find a pattern using topic modeling, but then write an article where you say little (if anything) about the topic model, and trace the trend in other, simpler ways. But I’ll also confess that topic modeling is not what excites me right now; I’m leaning toward supervised models and simple descriptive statistics.
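To make the “simpler ways” concrete: a minimal sketch of tracing a trend with plain descriptive statistics, using only the standard library. The corpus and the tracked word (“railway”) here are invented for illustration — the point is just that once a topic model has flagged a pattern, a word’s relative frequency over time can be reported without reference to the model at all.

```python
from collections import Counter

# Toy corpus of (year, text) pairs — invented purely for illustration.
docs = [
    (1850, "the railway timetable and the railway station"),
    (1860, "a railway journey by steam and rail"),
    (1870, "love and marriage in the country house"),
    (1880, "a quiet marriage in the village"),
]

def relative_frequency(text, word):
    """Occurrences of `word` per token in a single document."""
    tokens = text.split()
    return Counter(tokens)[word] / len(tokens)

# Trace one word's trend across the corpus, year by year.
trend = [(year, relative_frequency(text, "railway")) for year, text in docs]
```

A reader can check a result like this directly against the texts, which is part of what makes the simpler statistic rhetorically useful.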
We are aiming for a combined biology (ecology) and digital humanities workshop this summer. A lot of our methods and tools overlap when you start to look at the data as a series of texts and word counts.
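The overlap the comment gestures at can be sketched in a few lines: a text reduced to word counts has the same shape as an ecological site reduced to species abundances, so measures like “richness” (number of distinct types) transfer directly. The two toy texts below are invented for illustration.

```python
from collections import Counter

# Toy data, invented for illustration: each sample is a bag of counts,
# whether the items are words in a text or species at a field site.
texts = {
    "novel_a": "whale sea whale ship",
    "novel_b": "moor wind moor rain moor",
}

# Document-term counts: the textual analogue of a site-by-species table.
word_counts = {name: Counter(text.split()) for name, text in texts.items()}

# "Richness" — distinct types per sample — works identically in both fields.
richness = {name: len(counts) for name, counts in word_counts.items()}
```

Diversity indices from ecology (Shannon, Simpson) apply to the same count tables without modification.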