
Why DH has no future.

Digital humanities is about eleven years old — counting from John Unsworth’s coinage of the phrase in 2001 — which perhaps explains why it has just discovered mortality and is anxiously contemplating its own.

Image credit: Creative Commons BY-NC-SA 1.0, bigadventures.

Stephen Ramsay tried to head off this crisis by advising digital humanists that a healthy community “doesn’t concern itself at all with the idea that it will one day be supplanted by something else.” This was ethically wise, but about as effective as curing the hiccups by not thinking about elephants. Words like “supplant” have a way of sticking in your memory. Alex Reid then gave the discussion a twist by linking the future of DH to the uncertain future of the humanities themselves.

Meanwhile, I keep hearing friends speculate that the phrase “digital humanities” will soon become meaningless, since “everything will be digital,” and the adjective will be emptied out.

In thinking about these eschatological questions, I start from Matthew Kirschenbaum’s observation that DH is not a single intellectual project but a tactical coalition. Just for starters, humanists can be interested in digital technology a) as a way to transform scholarly communication, b) as an object of study, or c) as a means of analysis. These are distinct intellectual projects, although they happen to overlap socially right now because they all require skills and avocations that are not yet common among humanists.

This observation makes it pretty clear how “the digital humanities” will die. The project will fall apart as soon as it’s large enough for falling apart to be an option.

A) Transforming scholarly communication. This is one part of the project where I agree that “soon everyone will be a digital humanist.” The momentum of change here is clear, and there’s no reason why it shouldn’t be generalized to academia as a whole. As it does generalize, it will no longer be seen as DH.

B) Digital objects of study. It’s much less clear to me that all humanists are going to start thinking about the computational dimension of new cultural forms (videogames, recommendation algorithms, and so on). Here I would predict the classic sort of slow battle that literary modernism, for instance, had to wage in order to be accepted in the curriculum. The computational dimension of culture is going to become increasingly important, but it can’t simply displace the rest of cultural history, and not all humanists will want to acquire the algorithmic literacy required to critique it. So we could be looking at a permanent tension here, whether it ends up being a division within or between disciplines.

C) Digital means of analysis. The part of the project closest to my heart also has the murkiest future. If you forced me to speculate, I would guess that projects like text mining and digital history may remain somewhat marginal in departments of literature and history. I’m confident that we’ll build a few tools that get widely adopted by humanists; topic modeling, for instance, may become a standard way to explore large digital collections. But I’m not confident that the development of new analytical strategies will ever be seen as a central form of humanistic activity. The disciplinary loyalties of people in this subfield may also be complicated by the relatively richer funding opportunities in neighboring disciplines (like computer science).
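To make the topic-modeling example a little more concrete: the technique most often meant here is latent Dirichlet allocation (LDA), which infers clusters of co-occurring words from a collection of documents. The sketch below is a deliberately minimal collapsed Gibbs sampler in pure Python on a four-document toy corpus; the corpus, parameter values, and function name are all my illustrative assumptions, and a real project would use a mature library such as MALLET or gensim rather than anything like this.

```python
import random
from collections import Counter

def lda_gibbs(docs, n_topics=2, n_iters=200, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for LDA over tokenized documents.

    Returns one Counter per topic, mapping words to how often they
    were assigned to that topic in the final sampling state.
    """
    random.seed(seed)
    vocab_size = len({w for doc in docs for w in doc})

    # z[d][i]: topic currently assigned to the i-th token of document d
    z = [[random.randrange(n_topics) for _ in doc] for doc in docs]

    # Count tables kept in sync with the assignments in z
    doc_topic = [Counter(zd) for zd in z]          # topics per document
    topic_word = [Counter() for _ in range(n_topics)]  # words per topic
    topic_total = [0] * n_topics                   # tokens per topic
    for doc, zd in zip(docs, z):
        for w, t in zip(doc, zd):
            topic_word[t][w] += 1
            topic_total[t] += 1

    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                # Remove this token's current assignment from the counts
                t = z[d][i]
                doc_topic[d][t] -= 1
                topic_word[t][w] -= 1
                topic_total[t] -= 1
                # Resample its topic in proportion to the conditional
                # probability given every other assignment
                weights = [
                    (doc_topic[d][k] + alpha)
                    * (topic_word[k][w] + beta)
                    / (topic_total[k] + vocab_size * beta)
                    for k in range(n_topics)
                ]
                t = random.choices(range(n_topics), weights=weights)[0]
                z[d][i] = t
                doc_topic[d][t] += 1
                topic_word[t][w] += 1
                topic_total[t] += 1
    return topic_word

# Toy corpus: two nautical documents, two poetic ones
docs = [
    "whale ship sea harpoon ship sea".split(),
    "sea whale ship whale harpoon".split(),
    "love sonnet rhyme love verse".split(),
    "verse rhyme sonnet love sonnet".split(),
]
topics = lda_gibbs(docs, n_topics=2)
for k, counts in enumerate(topics):
    print(k, [w for w, _ in counts.most_common(3)])
```

On a corpus this small the sampler will usually, though not always, separate the nautical vocabulary from the poetic vocabulary; the point is only to show the shape of the computation, not to serve as a usable tool.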

So DH has no future, in the long run, because the three parts of DH probably confront very different kinds of future. One will be generalized; one will likely settle in for trench warfare; and one may well get absorbed by informatics. [Or become a permanent trade mission to informatics. See also Matthew Wilkens’ suggestion in the comments below. – Ed.] But personally, I’m in no rush to see any of this happen. My odds of finding a disciplinary home in the humanities will be highest as long as the DH coalition holds together; so here’s a toast to long life and happiness. We are, after all, only eleven.

[Update: In an earlier version of this post I dated Schreibman, Siemens, and Unsworth’s Companion to Digital Humanities to 2001, as Wikipedia does. But it appears that 2004 is the earliest publication date.]

[Update April 15th: I find that people are receiving this as a depressing post. But I truly didn’t mean it that way. I was trying to suggest that the projects currently grouped together as “DH” can transform the academy in a wide range of ways — ways that don’t even have to be confined to “the humanities.” So I’m predicting the death of DH only in an Obi-Wan Kenobi sense! I blame that picture of a drowned tombstone for making this seem darker than it is — a little too evocative …]

By tedunderwood

Ted Underwood is Professor of Information Sciences and English at the University of Illinois, Urbana-Champaign. On Twitter he is @Ted_Underwood.

17 replies on “Why DH has no future.”

Well put, Ted. This mirrors my own sense of things. On the last of the three parts, my guess is that computational methods will end up being something like theory: Everyone coming up through the ranks will eventually be expected to know something about it and to make use of it (generally in simplified form) where appropriate, but only a handful of people will in the long run work to advance computational studies as such. Seems a fine outcome to me.

Yes, I agree. I phrased things a bit more darkly than that in the post itself, but actually I think the way you’re putting it is more accurate. It’s like the way everyone in literary studies is expected to know a bit about psychology (Freudian or cognitive), but not everyone specializes in it.

Perhaps it’s hard for us to see that as a stable solution because humanists are still tempted by a model of dialectical succession where each new approach has to universalize itself and/or be superseded. In the sciences, it seems to me, they handle this differently. If the relationship between biology and chemistry becomes an interesting problem, you just delegate some people called “biochemists” as a semi-permanent subfield to deal with the issue. That’s probably where computational criticism is headed: a semi-permanent subfield that deals with the interface between humanistic interpretation and informatics. But it’s still hard for humanists to imagine that kind of mediating role as a durable one.

At the moment I don’t have anything to add to the line of thought except to say that I agree with it. I suspect, moreover, that there will be a time when everyone coming up through the ranks will learn something of the newer psychologies, cognitive, evolutionary, and neuro. And, when we get folks who have both skills in computational data analysis and some knowledge of cognitive science, language and cultural evolution, and evolutionary game theory, a few of those will be profound theorists of “digital humanities.”

Do you think the momentum of change is clear in favor of transforming scholarly communication? I wonder if that’s a sense that comes from hanging out online. Among my own colleagues, I detect very little understanding of, much less enthusiasm for, anything but the usual.

Well, when I talk about “momentum” I don’t necessarily mean “a consensus of opinion among full professors.” It’s going to take a while for that to become clear! But I do think it’s already clear where new ideas are being exchanged and transformed … and I don’t think it’s happening mostly in little blocks of wood pulp with a two-year production cycle followed by a two-year review cycle. So I don’t have any doubt about the direction of change.

Adapting tenure and promotion standards so they reflect the new reality is no doubt going to be a long battle, in part because there are real issues to be hashed out. But, at least locally here at Illinois, it seems to me that senior faculty know which way the wind is blowing. That may not be true everywhere, and even here, “knowing which way the wind is blowing” doesn’t imply any determinate action to support said wind. But yes, I think it’s clear where this is headed.

Hi Ted. Thanks for the post — interesting as always.

Here are a few thoughts on this topic. It strikes me that when considering digital humanities and its future (either as a term of art or as a community of practice), it is useful to look at other hybrid disciplines. Last year I had a fascinating conversation with a colleague at NSF about bioinformatics. As we spoke, I was amazed at the many parallels between the fields. In particular, the relationship “bioinformatics → biology” has some interesting parallels to “digital humanities → humanities.” Obviously, bioinformatics isn’t my specialty, but I’ll try to summarize a few of the points he made:

— bioinformatics is a field that combines biology, biomedicine, computer science, information science, and related disciplines.

— practitioners of bioinformatics come from different backgrounds: some are biologists who are interested in computing, some are computer scientists interested in biology, etc.

— many key practitioners are in non-tenure track jobs (#alt-ac). There is some tension in the field of bioinformatics because while these #alt-ac people are critical to the discipline, many work as staff in labs and find it difficult to get recognition, promotions, etc.

— promotion and tenure issues also come up for some TT faculty whose work is seen mainly as building infrastructure as opposed to original “new” research.

— the tools, techniques, and methods developed by bioinformatics practitioners may start off being cutting edge, but over time are adopted more widely by “traditional” biologists (whose work is becoming more and more digital over time).

— one key takeaway for me: even though traditional biologists now use technology in their research as an everyday thing, no one (to my knowledge) is saying that bioinformatics is “going away” or the term is “no longer needed.” Indeed, as the regular discipline becomes more digital, there continues to be a real need for the interdisciplinary expertise of bioinformatics to help develop the next generation of tools and methods.

If there are any bioinformatics folks out there, I’d love to hear your take on this, but I found this to be an interesting comparison to digital humanities. By the way, here’s a pretty good history of bioinformatics (lengthy, but quite readable):

Fascinating. The analogy to DH is very close, especially the analogy to the part I describe above as “DH as means of analysis.”

What’s especially interesting about this is the tension between “interdisciplinary project” and “support service.” Also, if we go by this analogy, the status of DH might remain a puzzling question for a good long time.

Bioinformatics is a classic case of a Hilfsdisziplin or ancillary discipline. It will become more rather than less important as time goes on, and its practitioners will become even more indispensable to the work of biologists than they already are. But it will remain an ancillary discipline. The “Digital Humanities” don’t like to think of themselves as ancillary to anything. They want to be the real thing. As I’ve said on different occasions, I don’t think there is or ought to be such a thing as DH. I could change my mind if DH thought of itself as the bundle of tools and methods that scholars in the humanities will need for this and that. The tools and methods will vary with the disciplines. But such a modest understanding of DH is the last thing in the minds of committed DHers. Many problems arise from this.

Hi Ted. Excellent post. More so for its succinctness and clarity. I wanted to add a category and one comment.

Maybe I’m off the mark, but I think digital preservation will migrate to departments in part and philology will see a renaissance. Several things follow from this: Libraries and departments will become closer to each other. We may see hybrid departments of discourse and practice crop up. These will be very different from the ones we grew up in. In that sense, digital approaches will become so intertwined with scholarly practice that everyone will be invested in contributing to methodological developments. As part of this naissance, we will see the very same critical discourse re-imagined through what you call “digital means of analysis.” The general will to justice and freedom of critique will not disappear, but the machinic within will precipitate into discourse. Hopefully, all these things combined will lead to equality between the tacit hermeneutics of building and critique’s ability to unsettle what we thought was transparent.

In the meantime, departments everywhere are having conversations with their deans to have a new hire. A bubble will blow, simply because there’s not that many of us right now. I’m not scared of this bubble in particular. I see it as the hook for a new generation of rising graduate students that will be given the time and incentive to pursue what was verboten until recently. And what dreams may come of that crop!

All that to say, I really don’t see an end in sight for the community we call DH, even if the barbarians are knocking at the door.

Hi Alex. I like the future you’re envisioning more than the one I envisioned. And I can see how it might happen. If DH helped the humanities embrace the concept of “practice” wholeheartedly, we could join forces with librarians and start teaching students an active kind of critical literacy that included a bit of hacktivism and a dash of statistics along with history, critical theory, and rhetoric. That’s probably what we need in the 21st century.

Five months ago I was more sanguine about that happening. I’m feeling skeptical now because I have a growing awareness that most humanists may not want the future we’re both envisioning. But I’m not pessimistic about the big picture; I’m just beginning to think that the energy you describe may end up located mostly in disciplinary interstices instead of being located “in” the humanities. Hope I’m wrong, though; I would rather live in your future!

To be fair to my colleagues, I have to add that the real question may be this: What do English majors want? Some of the stuff we’re talking about is hard. If English majors wanted to take informatics, they would. Maybe they prefer to sit in a circle, discussing Heart of Darkness. And the way our system is structured, there are strong incentives to give students what they prefer.

I’m putting this cynically, and of course the hope is that Conrad, historicism, and information science would fuse to create something much livelier than “a course in informatics.” But I just wanted to register that senior faculty aren’t the only brake on change; student preference is also a factor.

If English majors wanted to take informatics, they would. Maybe they prefer to sit in a circle, discussing Heart of Darkness.

But how much of an intellectual discipline can you build on top of sitting in a circle and discussing canonical texts?

In one form or another this is a question that’s been with us for a while. In the introduction to his 1957 Anatomy of Criticism, Northrop Frye argued in favor of systematic critical analysis against those who argued that talking about literature in such a way was somehow sacrilege. In the 1970s Geoffrey Hartman wrote an essay, The Fate of Reading, in which he lamented that the new ’rithmatics (Hartman’s cover term for semiotics and structuralism) weren’t even reading in the standard sense. In effect, they were “distant” reading (though he didn’t use that term). In his view, the aim of critical reading, that is, critical writing, was to bring the critic deeper in touch with the text. Now Alan Liu wants to add some form of cultural critique to DH.

In a sense, one can see Hartman attempting to take what Frye had argued for and yoke it back to just plain old unspoken reading. And now Liu is calling for DH to add on the forms of critique that grew up in the wake of structuralism and semiotics, those pesky new ’rithmatics. Obviously, it’s very difficult to theorize what a computer is doing to a largish corpus of texts as a form of plain old reading. But why try? Why not accept it for what it is? Why not let the distance stay out there?

This is a very intriguing and thought-provoking post, with some great follow-up comments to boot. You hit the nail squarely on the head in your response to Matthew at the top when you wrote “Perhaps it’s hard for us to see that as a stable solution because humanists are still tempted by a model of dialectical succession where each new approach has to universalize itself and/or be superseded.” As someone working with cognitive science and literary analysis, I frequently run into this mindset. I’ve often heard from senior colleagues that because cognitive science is a broad and ever-expanding field, it’s impossible to extract a concrete, universal, permanent model for literary interpretation from it, and thus the entire field of cognitive literary analysis should be considered suspect. I’m increasingly frustrated by the seemingly inherent notion that scholarship should be written in stone, with no room for the evolution of methodology. If the “new” method cannot clearly demonstrate how it makes the “old” method completely obsolete, then it has no place in the humanities…and so forth.

I also appreciate your comment re: student preference being an important factor in DH work, particularly in digital pedagogy. The first time I introduced Twitter in a Shakespeare course a couple of years ago, the students balked. All they wanted to do, they explained, was show up two days a week to learn what was going on in the play, gush over some of the more flowery sections of verse, and be on their way. This is one of the things that has really hit me over the last couple of years — time and time again, students reveal that they still hold a very formal, very traditional view of what a “classroom” actually is, about not just how but WHERE education takes place. Although I believe that students are becoming more accustomed to digital expansions of the classroom, it seems to me that many of them need to be 1) convinced that such endeavors are not merely “gimmicks” and 2) persuaded that the digital component of the classroom adds real value to their investment in the class. Once those concepts can be demonstrated, they’re much more eager to get on board.

A young digital humanist in whom I’ve taken an interest recommended I read Abbott’s The System of Professions a few months back, when I was musing aloud along these same lines regarding my own fields (agile software development and machine learning). If you haven’t done so, I’d strongly suggest you read it; not merely as participants in the local dynamic, but as academics (which I am not) who may not even be thinking of the larger credentialing-and-self-definition dynamic.

But this whole musing riff on self-definition takes me back to the 2006 Chicago DHCS conference, where I think I was the only non-academic non-library non-museum speaker. Only a few months earlier I’d spent a week at a much larger conference in a “completely” “unrelated” field—genetic programming. And there I’d watched the same cultural dynamics in action: the shift from defensive self-explanation at home to boundary-setting and partitioning of domain into subdomains and fiefdoms among colleagues; the establishment of pecking orders in various applications and contexts; the fights over cites.

The same conversations were starting to happen in DH already in 2006, even though by the clock DH was about five years “younger” than genetic programming.

I wish I’d had Abbott in my pocket in 2006… even though I suppose he was just down the street when I was in Chicago. The story makes a lot more sense now, having read his framing analysis.

So give it a look.


I’ll have to take a look at Abbott. In general I’m a big fan of sociological approaches to professional credentialing. Some of what you describe above may also just be the lovable primate behavior that’s been going on ever since we started grooming each other for ticks.