This is going to be a short, sweet, slightly-basic blog post, because I just have a simple thing to say.
I was originally trained as a scholar of eighteenth- and nineteenth-century British literature. As I learn more about other disciplines, I have been pleased to find that they are just as self-conscious and theoretically reflective as the one where I was trained. Every discipline has its own kind of theory.
But there is one thing that I still believe the humanities do better than any other part of the university: reflecting on historical change and on the historical mutability of the human mind. Lately social scientists (e.g. economic historians or physical anthropologists) can sometimes give us a run for our money. But humanists are more accustomed to the paradoxes that emerge when the rules of the game you’re playing can get historicized and contextualized, and change right under your feet. (We even have a word for it: “hermeneutics.”) So I think we still basically own the dimension of time.
At the moment, we aren’t celebrating that fact very much. Perhaps we’re still reeling from the late-20th-century discovery that the humanities’ connection to the past can be described as “cultural capital.” Ownership of the collective past is something people fight over, and the humanities had a central position in 19th- and 20th-century education partly because they had the social function of distributing that kind of authority.
Wariness about that social function is legitimate and necessary. However, I don’t think it can negate the basic fact that human beings are shaped and guided by a culture we inherit. In a very literal sense we can’t understand ourselves without understanding the past.
I don’t think we can afford to play down this link to the past. At a moment when the humanities feel threatened by technological change, it may be tempting to get rid of anything that looks dusty. Out with seventeenth-century books, in with social media and sublimely complex network diagrams. Instead of identifying with the human past, we increasingly justify our fields of study by talking about “humanistic values.” The argument implicit (and sometimes explicit) in that gesture is that the humanities are distinguished from other disciplines not by what we study, but by studying it in a more critical or ethical way.
Maybe that will work. Maybe the world will decide that it needs us because we are the only people preserving ethical reflection in an otherwise fallen age of technology. But I don’t know. There isn’t a lot of evidence that humanists are actually, on average, more ethical than other people. And even if there were good evidence, “I am more critical and ethical than ye” is the kind of claim that often proves a hard sell.
But that’s a big question, and the jury is out. And anyway the humanities don’t need more negativity at the moment. I mainly want to underline a positive point, which is that historical change is a big deal for hominids. Its importance isn’t declining. We are self-programmed creatures, and it is a basic matter of self-respect to try to understand how and when we got our instructions. Humanists are people who try to understand that story, and we should take pride in our connection to time.
One thing I’ve never understood about humanities disciplines is our insistence on staging methodology as ethical struggle. I don’t think humanists are uniquely guilty here; at bottom, it’s probably the institution of disciplinarity itself that does it. But the normative tone of methodological conversation is particularly odd in the humanities, because we have a reputation for embracing multiple perspectives. And yet, where research methods are concerned, we actually seem to find that very hard.
It never seems adequate to say “hey, look through the lens of this method for a sec — you might see something new.” Instead, critics practicing historicism feel compelled to justify their approach by showing that close reading is the crypto-theological preserve of literary mandarins. Arguments for close reading, in turn, feel compelled to claim that distant reading is a slippery slope to takeover by the social sciences — aka, a technocratic boot stomping on the individual face forever. Or, if we do admit that multiple perspectives have value, we often feel compelled to prescribe some particular balance between them.
Imagine if biologists and sociologists went at each other in the same way.
“It’s absurd to study individual bodies, when human beings are social animals!”
“Your obsession with large social phenomena is a slippery slope — if we listened to you, we would eventually forget about the amazing complexity of individual cells!”
“Both of your methods are regrettably limited. What we need, today, is research that constantly tempers its critique of institutions with close analysis of mitochondria.”
As soon as we back up and think about the relation between disciplines, it becomes obvious that there’s a spectrum of mutually complementary approaches, and different points on the spectrum (or different combinations of points) can be valid for different problems.
So why can’t we see this when we’re discussing the possible range of methods within a discipline? Why do we feel compelled to pretend that different approaches are locked in zero-sum struggle — or that there is a single correct way of balancing them — or that importing methods from one discipline to another raises a grave ethical quandary?
It’s true that disciplines are finite, and space in the major is limited. But a debate about “what will fit in the major” is not the same thing as ideology critique or civilizational struggle. It’s not even, necessarily, a substantive methodological debate that needs to be resolved.
Academics have been discussing a crisis “in” or “of” the humanities since the late 1980s. Scholars disagree about the nature of the crisis, but it’s a widely shared premise that one is located somewhere “in the humanities.”
Since the beginning of the twentieth century, when administrators at Columbia, Chicago, Yale, and Harvard began to speak fervently of the moral and spiritual benefits of a university education, “the humanities” has served as the name and the form of the link Arnold envisioned between culture, education, and the state. Particularly after World War II, the humanities began to be opposed not just to its traditional foil, science, but also to social science, whose emergence as a powerful force in the American academy was marked by the founding of the Center for Advanced Study in the Behavioral Sciences at Stanford in 1951 (87).
In research for a forthcoming book (Why Literary Periods Mattered, Stanford UP) I’ve poked around a bit in the institutional history of the early-twentieth-century university, and Harpham’s thesis rings true to me. Although the word has a pre-twentieth-century history, our present understanding of “the humanities” is strongly shaped by an institutional opposition between humanities and social sciences that only made sense in the twentieth century. For whatever it’s worth, Google Books also tends to support Harpham’s contention that the concept of the humanities has only possessed its present prominence since WWII.
Defenses of “culture,” of course, are older. But it hasn’t always been clear that culture was coextensive with the disciplines now grouped together as humanistic. In the middle of the twentieth century, literary critics like René Wellek fervently defended literary culture from philistine encroachment by the discipline of history. The notion that literary scholars and historians must declare common cause against a besieging world of philistines is a very different script, and one that really only emerged in the last thirty years.
Why do I say all this? Am I trying to divide literary scholars from historians? Don’t I see that we have to hang together, or hang separately?
I understand that higher education, as a whole, is under attack from the right. So I’m happy to declare common cause with people who are working to articulate the value of literary studies and history — or for that matter, anthropology and library science. But I don’t think it’s quite inevitable that these battles should be fought under the flag of the humanities.
Or one could argue that we’d be better off fighting for specific concepts like “literature” and “history” and “art.” People outside the university know what those are. It’s not clear that they have a vivid concept of the humanities. It’s a term of recent and mostly academic provenance.
On the other hand, there may be good reason to mobilize around “the humanities.” Certainly the NEH itself is worth defending. Ultimately, this is a question of political strategy, and I don’t have strong opinions about it. I’m very happy to see people defending individual disciplines, or the humanities, or higher education as a whole. In my eyes, it’s all good.
But I do want to push back gently against the notion that scholars in any discipline have a political obligation to organize under the banner of “the humanities,” or an intellectual obligation to define “humanistic” methods. The concept of the humanities may well be a recent invention, shaped by twentieth-century struggles over institutional turf. We talk about “humanistic values” as if they were immemorial. But Erasmus did not share our sense that history and literature have to band together in order to resist encroachment by sociology.
More pointedly: cultural criticism and humanities advocacy are fundamentally different things. There have been many kinds of critical, politically engaged intellectuals; only in the last sixty years have some of them self-identified as humanists.
Added a few hours after posting: To show a few more of my own cards, I’ll confess that what I love most about DH is the freedom to ignore disciplinary boundaries and follow shared problems wherever they lead. But I’m beginning to suspect that the concept of the humanities may itself discourage interdisciplinary risks. It seems to have been invented (rather recently) to define certain disciplines through their collective difference from the social and natural sciences. If that’s true, “digital humanities” may be an awkward concept for me. I’m a literary historian, and I do feel loyalty to the methods of that discipline. But I don’t feel loyalty to them specifically as different from the sciences.
Added a day after initial posting: And, to be clear, I don’t mean that we need a better name than “digital humanities.” There’s a basic tension between interdisciplinarity and field definition — so any name can become constricting if you spend too much time defining it. For me the bottom line is this: I like the interdisciplinary energy that I’ve found in the DH blogosphere and don’t care what we call it — don’t care, in a radical way — to the extent that I don’t even care whether critics think DH is consonant with, quote, “humanistic values.” Because in truth, some of those values are recent inventions, shaped by pressure to differentiate the humanities from the social sciences — and that move deserves to be questioned every bit as much as DH itself does. /done now
Harpham, Geoffrey Galt. The Humanities and the Dream of America. Chicago: University of Chicago Press, 2011. (I should note that I may not agree with all aspects of Harpham’s argument. In particular, I’m not yet persuaded that the concept of ‘the humanities’ is as fully identified with the United States in particular as he argues.)
Liu, Alan. “Where is Cultural Criticism in the Digital Humanities.” Debates in the Digital Humanities. Ed. Matthew K. Gold. (Minnesota: University of Minnesota Press, 2012). 490-509.
Spiro, Lisa. “‘This is Why We Fight’: Defining the Values of the Digital Humanities.” Debates in the Digital Humanities. Ed. Matthew K. Gold. (Minnesota: University of Minnesota Press, 2012). 16-35.
When I saw the meme to the right come across my Facebook newsfeed — and then get widely shared! — I realized that the field of digital humanities is confronting a PR crisis. In literary studies, a lot of job postings are suddenly requesting interest or experience in DH. This requirement was not advertised when people began their dissertations, and candidates are understandably ticked off by the late-breaking news.
I know where they’re coming from, since I’ve spent much of the past twenty years having to pretend that my work was relevant to a wide variety of theoretical questions I wasn’t all that passionate about. Especially in job interviews. Did my work engage de Man’s well-known essays on the topic? “Bien sûr.” Had I considered postcolonial angles? “Of course. It would be unethical not to.” And so on. There’s nothing scandalous about this sort of pretense. Not every theme can be central to every project, but it’s still fair to ask people how their projects might engage a range of contemporary debates.
The problem we’re confronting now in DH is that people don’t feel free to claim a passing acquaintance with our field. If they’re asked about Marxist theory, they can bullshit by saying “Althusser, Williams, blah blah blah.” But if they’re asked about DH, they feel they have to say “no, I really don’t do DH.” Which sounds bracingly straightforward. Except, in my opinion, bracingly straightforward is bad for everyone’s health. It locks deserving candidates out of jobs they might end up excelling in, and conversely, locks DH itself out of the mainstream of departmental conversation.
I want to give grad students permission to intelligently bullshit their way through questions about DH just as they would any other question. For certain jobs — to be sure — that’s not going to fly. At Nebraska or Maryland or George Mason or McGill, they may want someone who can reverse the polarity on the Drupal generator, and a general acquaintance with DH discourse won’t be enough. But at many other institutions (including, cough, many elite ones) they’re just getting their toes wet, and may merely be looking for someone informed about the field and interested in learning more about it. In that case “intelligent, informed BS” is basically what’s desired.
What makes this tricky is that DH — unlike some other theoretical movements — does have a strong practical dimension. And that tends to harden boundaries. It makes grad students (and senior faculty) feel that no amount of information about DH will ever be useful to them. “If I don’t have time to build a web page from scratch, I’m never going to count as a digital humanist, so why should I go to reading groups or surf blogs?”
Naturally, I do want to encourage people to pick up some technical skills. They’re fun. But I think it’s also really important for the health of the field that DH should develop the same sort of penumbra of affiliation that every other scholarly movement has developed. It needs to be possible to intelligently shoot the breeze about DH even if you don’t “do” it.
There are a lot of ways to develop that kind of familiarity, from reading Matt Gold’s Debates in Digital Humanities, to surfing blogs, to blogging for yourself, to Lisa Spiro’s list of starting places in DH, to following people on Twitter, to thinking about digital pedagogy with NITLE, to affiliation with groups like HASTAC or NINES or 18th Connect. (Please add more suggestions in comments!) Those of us who are working on digital research projects should make it a priority to draw in local collaborators and/or research assistants. Even if grad students don’t have time to develop their own digital research project from the ground up, they can acquire some familiarity with the field. Finally, in my book, informed critique of DH also counts as a way of “doing DH.” When interviewers ask you whether you do DH, the answer can be “yes, and I’m specifically concerned about the field’s failure to address X.”
Bottom line: grad students shouldn’t feel that they’re being asked to assume a position as “digital” or “analog” humanists, any more than they’re being asked to declare themselves “for” or “against” close reading and feminism. DH is not an identity category; it’s a project that your work might engage, indirectly, in a variety of ways.
Though my assessment of print scholarship is not as dark as Alex’s, I do share a bit of his puzzlement. To me, the concept of “open review” sometimes feels like an attempt to fit a round peg in a square hole.
I’m completely convinced about the value of the open intellectual exchange that happens on academic blogs. I’m constantly learning from other people’s blogs, and from their comments on mine. I’ve been warned away from dead ends, my methodology has improved, I’ve learned about sources I would otherwise have overlooked. It’s everything that’s supposed to happen at a conference — but rarely does. And you don’t have to pay for a plane ticket.
This kind of exchange is “open,” and it has intellectual value. On the other hand, I have no desire to claim that it constitutes a “review” process. It’s better than review: it’s learning. I don’t feel that I need to get credit for it on my vita, because the stuff I learn is going to produce articles … which I can then list on my vita.
As far as those articles are concerned, I’m more or less happy with existing review structures. I don’t (usually) learn as much from the formal review process as I do from blogs, but I’m okay with that: I can live with the fact that “review” is about selection and validation rather than open dialogue. (Also, note “usually” above: there are exceptions, when I get a really good reader/editor.)
To say the same thing more briefly: I think the Journal of Digital Humanities has the model about right. Articles in JDH tend to begin life as blog posts. They tend to get kicked around pretty vigorously by commenters: that’s the “open” part of the process, where most of the constructive criticism, learning, and improvement take place. Then they’re selected by the editors of JDH, which to my mind is “review.” The editors may not have to give detailed suggestions for revision, because the give-and-take of the blog stage has probably already shown the author where she wants to expand or rethink. The two stages (“open” and “review”) are loosely related, but not fused. As I understand the process, selection is partly (but only partly) driven by the amount of discussion a post stirred up.
If you ask, why not fuse the two stages? I would say, because they’re doing different sorts of work. I think open intellectual exchange is most fun when it feels like a reciprocal exchange of views rather than a process where “I’m asking you to review my work.” So I’d rather not force it to count as a review process. Conversely, I suspect there are good reasons for the editorial selection process to be less than perfectly open. Majorities should rule in politics, but perhaps not always in academic debate.
But if people want to keep trying to fuse the “open” part with the “review” part, I’ve got no objection. It’s worth a try, and trying does no harm.
Digital humanities is about eleven years old — counting from John Unsworth’s coinage of the phrase in 2001 — which perhaps explains why it has just discovered mortality and is anxiously contemplating its own.
Stephen Ramsay tried to head off this crisis by advising digital humanists that a healthy community “doesn’t concern itself at all with the idea that it will one day be supplanted by something else.” This was ethically wise, but about as effective as curing the hiccups by not thinking about elephants. Words like “supplant” have a way of sticking in your memory. Alex Reid then gave the discussion a twist by linking the future of DH to the uncertain future of the humanities themselves.
Meanwhile, I keep hearing friends speculate that the phrase “digital humanities” will soon become meaningless, since “everything will be digital,” and the adjective will be emptied out.
In thinking about these eschatological questions, I start from Matthew Kirschenbaum’s observation that DH is not a single intellectual project but a tactical coalition. Just for starters, humanists can be interested in digital technology a) as a way to transform scholarly communication, b) as an object of study, or c) as a means of analysis. These are distinct intellectual projects, although they happen to overlap socially right now because they all require skills and avocations that are not yet common among humanists.
This observation makes it pretty clear how “the digital humanities” will die. The project will fall apart as soon as it’s large enough for falling apart to be an option.
A) Transforming scholarly communication. This is one part of the project where I agree that “soon everyone will be a digital humanist.” The momentum of change here is clear, and there’s no reason why it shouldn’t be generalized to academia as a whole. As it does generalize, it will no longer be seen as DH.
B) Digital objects of study. It’s much less clear to me that all humanists are going to start thinking about the computational dimension of new cultural forms (videogames, recommendation algorithms, and so on). Here I would predict the classic sort of slow battle that literary modernism, for instance, had to wage in order to be accepted in the curriculum. The computational dimension of culture is going to become increasingly important, but it can’t simply displace the rest of cultural history, and not all humanists will want to acquire the algorithmic literacy required to critique it. So we could be looking at a permanent tension here, whether it ends up being a division within or between disciplines.
C) Digital means of analysis. The part of the project closest to my heart also has the murkiest future. If you forced me to speculate, I would guess that projects like text mining and digital history may remain somewhat marginal in departments of literature and history. I’m confident that we’ll build a few tools that get widely adopted by humanists; topic modeling, for instance, may become a standard way to explore large digital collections. But I’m not confident that the development of new analytical strategies will ever be seen as a central form of humanistic activity. The disciplinary loyalties of people in this subfield may also be complicated by the relatively richer funding opportunities in neighboring disciplines (like computer science).
So DH has no future, in the long run, because the three parts of DH probably confront very different kinds of future. One will be generalized; one will likely settle in for trench warfare; and one may well get absorbed by informatics. [Or become a permanent trade mission to informatics. See also Matthew Wilkens’ suggestion in the comments below. – Ed.] But personally, I’m in no rush to see any of this happen. My odds of finding a disciplinary home in the humanities will be highest as long as the DH coalition holds together; so here’s a toast to long life and happiness. We are, after all, only eleven.
[Update April 15th: I find that people are receiving this as a depressing post. But I truly didn’t mean it that way. I was trying to suggest that the projects currently grouped together as “DH” can transform the academy in a wide range of ways — ways that don’t even have to be confined to “the humanities.” So I’m predicting the death of DH only in an Obi-Wan Kenobi sense! I blame that picture of a drowned tombstone for making this seem darker than it is — a little too evocative …]
In responding to Stanley Fish last week, I tried to acknowledge that the “digital humanities,” in spite of their name, are not centrally about numbers. The movement is very broad, and at the broadest level, it probably has more to do with networked communication than it does with quantitative analysis.
The older tradition of “humanities computing” — which was about numbers — has been absorbed into this larger movement. But it’s definitely the part of DH that humanists are least comfortable with, and it often has to apologize for itself. So, for instance, I’ve spent much of the last year reminding humanists that they’re already using quantitative text mining in the form of search engines — so it can’t be that scary.* Kathleen Fitzpatrick recently wrote a post suggesting that “one key role for a ‘worldly’ digital humanities may well be helping to break contemporary US culture of its unthinking association of numbers with verifiable reality….” Stephen Ramsay’s Reading Machines manages to call for an “algorithmic criticism” while at the same time suggesting that humanists will use numbers in ways that are altogether different from the way scientists use them (or at least different from “scientism,” an admittedly ambiguous term).
I think all three of us (Stephen, Kathleen, and myself) are making strategically necessary moves. Because if you tell humanists that we do (also) need to use numbers the way scientists use them, your colleagues are going to mutter about naïve quests for certainty, shake their heads, and stop listening. So digital humanists are rhetorically required to construct positivist scapegoats who get hypothetically chased from our villages before we can tell people about the exciting new kinds of analysis that are becoming possible. And, to be clear, I think the people I’ve cited (including me) are doing that in fair and responsible ways.
However, I’m in an “eppur si muove” mood this morning, so I’m going to forget strategy for a second and call things the way I see them. <Begin Galilean outburst>
In reality, scientists are not naïve about the relationship between numbers and certainty, because they spend a lot of time thinking about statistics. Statistics is the science of uncertainty, and it insists — as forcefully as any literary theorist could — that every claim comes accompanied by a specific kind of ignorance. Once you accept that, you can stop looking for absolute knowledge, and instead reason concretely about your own relative uncertainty in a given instance. I think humanists’ unfamiliarity with this idea may explain why our critiques of data mining so often take the form of pointing to a small error buried somewhere in the data: unfamiliarity with statistics forces us to fall back on a black-and-white model of truth, where the introduction of any uncertainty vitiates everything.
Moreover, the branch of statistics most relevant to text mining (Bayesian inference) is amazingly, almost bizarrely willing to incorporate subjective belief into its definition of knowledge. It insists that definitions of probability have to depend not only on observed evidence, but on the “prior probabilities” that we expected before we saw the evidence. If humanists were more familiar with Bayesian statistics, I think it would blow a lot of minds.
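To make that concrete with a toy example (my own illustration, using made-up numbers rather than any real DH dataset): suppose we believe, before looking at the evidence, that a topic appears in roughly 10% of documents in a collection, and we then observe it in 30 of 100 documents. A simple Beta-Binomial update blends the prior expectation with the evidence, and the resulting belief lands between the two:

```python
# Beta-Binomial updating: prior belief + evidence -> posterior belief.
# Hypothetical numbers: we expect the topic in ~10% of documents
# (encoded as a Beta(2, 18) prior), then observe it in 30 of 100.
prior_a, prior_b = 2, 18          # prior "pseudo-counts" encoding the 10% hunch
hits, misses = 30, 70             # the observed evidence

# Bayesian updating for this model is just adding the counts together.
post_a, post_b = prior_a + hits, prior_b + misses
posterior_mean = post_a / (post_a + post_b)

print(round(posterior_mean, 3))   # prints 0.267 -- between the prior (0.1)
                                  # and the observed rate (0.3)
```

The point of the sketch is the one in the paragraph above: the answer depends explicitly, and by design, on what we believed before we looked.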
I know the line about “lies, damn lies, and so on,” and it’s certainly true that statistics can be abused, as this classic xkcd comic shows. But everything can be abused. The remedy for bad verbal argument is not to “remember that speech should stay in its proper sphere” — it’s to speak better and more critically. Similarly, the remedy for bad quantitative argument is not “remember that numbers have to stay in their proper sphere”; it’s to learn statistics and reason more critically.
None of this is to say that we can simply borrow tools or methods from scientists unchanged. The humanities have a lot to add — especially when it comes to the social and historical character of human behavior. I think there are fascinating advances taking place in data science right now. But when you take apart the analytic tools that computer scientists have designed, you often find that they’re based on specific mistaken assumptions about the social character of language. For instance, there’s a method called “Topics over Time” that I want to use to identify trends in the written record (Wang and McCallum, 2006). The people who designed it have done really impressive work. But if a humanist takes apart the algorithm underlying this method, they will find that it assumes that every trend can be characterized as a smooth curve called a “Beta distribution.” Whereas in fact, humanists have evidence that the historical trajectory of a topic is often more complex than that, in ways that really matter. So before I can use this tool, I’m going to have to fix that part of the method.
But this is a problem that can be fixed, in large part, by fixing the numbers. Humanists have a real contribution to make to the science of data mining, but it’s a contribution that can be embodied in specific analytic insights: it’s not just to hover over the field like the ghost of Ben Kenobi and warn it about hubris.
For related thoughts, somewhat more temperate than the outburst above, see this excellent comment by Matthew Wilkens, responding to a critique of his work by Jeremy Rosen.
* I credit Ben Schmidt for this insight so often that regular readers are probably bored. But for the record: it comes from him.
Fish seems less suspicious of computing these days, and he understands the current contours of digital humanities well. As he implies, DH is not a specific method or theory, but something more like a social movement that extends messily from “the refining of search engines” to “the rethinking of peer review.”
In short, Fish’s column is kind enough. But I want to warn digital humanists about the implications of his flattery. Literary scholars are addicted to a specific kind of methodological conflict. Fish is offering an invitation to consider ourselves worthy of joining the fight. Let’s not.
The outlines of the debate I have in mind emerge at the end of this column as Fish sets up his next one. It turns out that the discipline of literary studies is in trouble! Maybe enrollments are down, or literary culture is in peril; as Fish himself hints, this script is so familiar that we hardly need to spell out the threat. Anyway, the digital humanities have implicitly promised that their new version of the discipline will ensure “the health and survival of the profession.” But can they really do so? Tune in next week …
Or don’t. As flattering as it is to be cast in this drama, digital humanists would be better advised to bow out. The disciplinary struggle that Fish wants to stage around us is not our fight, and was perhaps never a very productive fight anyway.
In explaining why I feel this way, I’m going to try to address both colleagues who “do” DH and those who are apprehensive about it. I think it’s fair to be apprehensive, but the apprehension I’m hearing these days (from Fish and from my own friends) seems to me too narrowly targeted. DH is not the kind of trend humanists are used to, which starts with a specific methodological insight and promises to revive a discipline (or two) by generalizing that insight. It’s something more diffuse, and the diffuseness matters.
1. Why isn’t digital humanities yet another answer to the question “How should we save literary studies?” First of all, because digital humanities is not a movement within literary studies. It includes historians and linguists, computer scientists and librarians.
“Interdisciplinary?” Maybe, but extra-disciplinary might be a better word, because DH is not even restricted to the ranks of faculty. When I say “librarians,” I mean not only faculty in library schools, but people with professional appointments in libraries. Academic professionals have often been the leading figures in this field.
So DH is really not another movement to revitalize literary studies by making it relevant to [X]. There are people who would like to cast it in those terms. Doing so would make it possible to stage a familiar sort of specifically disciplinary debate. It would also, incidentally, allow the energy of the field to be repossessed by faculty, who have historically been in charge of theoretical debate, but not quite so securely in charge of (say) collaborations to build new infrastructure. [I owe this observation to Twitter conversation with Bethany Nowviskie and Miriam Posner.]
But reframing digital humanities in that way would obscure what’s actually interesting and new about this moment — new opportunities for collaboration both across disciplines and across the boundary between the conceptual work of academia and the infrastructure that supports and tacitly shapes it.
2. That sounds very nice, but isn’t there still an implicit disciplinary argument — and isn’t that the part of this that matters?
I understand the suspicion. In literary studies, change has almost always taken place through a normative claim about the proper boundaries of the discipline. Always historicize! Or, on second thought, no: don’t historicize, but instead revive literary culture by returning to our core competence of close reading!
But in my experience digital humanists are really not interested in regulating disciplinary boundaries — except insofar as they want a seat at the table. “Isn’t DH about turning the humanities into distant reading and cliometrics and so on?” Again, understandable, but no. I personally happen to be enthusiastic about distant reading, but DH is more diverse than that. Digital humanists approach interpretation in a lot of different ways, at different scales. Some people focus tightly on exploration of a single work. “But isn’t it in any case about displacing interpretation with a claim to empirical truth?” Absolutely not. Here I can fortunately recommend Stephen Ramsay’s recent book Reading Machines, which understands algorithms as ways of systematically deforming a text in order to enhance interpretive play. Ramsay is quite eloquent about the dangers of “scientism.”
The fundamental mistake here may be the assumption that quantitative methods are a new thing in the humanities, and therefore must imply some new and terrifyingly normative positivism. They aren’t new. All of us have been using quantitative tools for several decades — and using them to achieve a wide variety of theoretical ends. The only thing that’s new in the last few years is that humanists are consciously taking charge of the tools ourselves. But I’ve said a lot about that in the past, so I’ll just link to my previous discussion.
3. Well, shouldn’t DH be promising to save literary studies, or the humanities as a whole? Isn’t it irresponsible to ignore the present crisis in academia?
Digital humanists haven’t ignored the social problems of academia; on the contrary, as Fish acknowledges, they’re engaging those problems at multiple levels. Rethinking peer review and scholarly publishing, for instance. Or addressing the tattered moral logic of graduate education by trying to open alternate career paths for humanists. Whatever it means to “do digital humanities,” it has to imply thinking about academia as a social institution.
But it doesn’t have to imply the mode of social engagement that humanists have often favored — which is to make normative claims about the boundaries of our own disciplines, with the notion that in doing so we are defending some larger ideal. That’s not a part of the job we should feel guilty about skipping.
4. Haven’t you defined “digital humanities” so broadly that it’s impossible to make a coherent argument for or against it?
I have, and that might be a good thing. I sometimes call DH a “field” because I lack a better word, but digital humanities is not a discipline or a coherent project. It’s a rubric under which a bunch of different projects have gathered — from new media studies to text mining to the open-access movement — linked mainly by the fact that they are responding to related kinds of fluidity: rapid changes in representation, communication, and analysis that open up detours around some familiar institutions.
It’s hard to be “for” or “against” a set of developments like this — just as it was hard to be for or against all types of “theory” at the same time. Of course, the emptiness of a generally pro- or anti-theory position never stopped us! Literary scholars are going to want to take a position on DH, as if it were a familiar sort of polemical project. But I think DH is something more interesting than that — intellectually less coherent, but posing a more genuine challenge to our assumptions.
I suppose, if pressed, I would say “digital humanities” is the name of an opportunity. Technological change has made some of the embodiments of humanistic work — media, archives, institutions, perhaps curricula — a lot more plastic than they used to be. That could turn out to be a good thing or a bad thing. But it’s neither of those just yet: the meaning of the opportunity is going to depend on what we make of it.
There are already several great posts out there that exhaustively list resources and starting points for people getting into DH (a lot of them are by Lisa Spiro, who is good at it).
This will be a shorter list. I’m still new enough at this to remember what surprised me in the early going, and there were two areas where my previous experience in the academy failed to prepare me for the fluid nature of this field.
1) I had no idea, going into this, just how active a scholarly field could be online. Things are changing rapidly — copyright lawsuits, new tools, new ideas. To find out what’s happening, I think it’s actually vital to lurk on Twitter. Before I got on Twitter, I was flying blind, and didn’t even realize it. Start by following Brett Bobley, head of the Office of Digital Humanities at the NEH. Then follow everyone else.
2) The technical aspect of the field is important — too important, in many cases, to be delegated. You need to get your hands dirty. But the technical aspect is also much less of an obstacle than I originally assumed. There’s an amazing amount of information on the web, and you can teach yourself to do almost anything in a couple of weekends.* Realizing that you can is half the battle. For a pep talk / inspiring example, try this great narrative by Tim Sherratt.
That’s it. If you want more information, see the links to Lisa Spiro and DiRT at the top of this post. Lisa is right, by the way, that the place to start is with a particular problem you want to solve. Don’t dutifully acquire skills that you think you’re supposed to have for later use. Just go solve that problem!
* ps: Technical obstacles are minor even if you want to work with “big data.” We’re at a point now where you can harvest your own big data — big, at least, by humanistic standards. Hardware limitations are not quite irrelevant, but you won’t hit them for the first year or so, though you may listen anxiously while that drive grinds much more than you’re used to …
I’m a relative newcomer to digital humanities; I’ve been doing this for about a year now. The content of the field has been interesting, but in some ways even more interesting is the way it has transformed my perception of the academy as a social structure. There are clearly going to be debates over the next few years between more and less digitized humanists, and debate is probably a good thing for everyone. But the debate can be much more illuminating if we acknowledge up front that it’s also a tension between two different forms of social organization.
Here’s what happens when that dimension of the issue goes unacknowledged: a tenured or tenure-track faculty member will give a talk or write a blog post about the digital humanities, saying essentially “you’ve got some great tools there, but before they can really matter, their social implications need to be theorized more self-consciously.” Said professor is then surprised when the librarians, academic professionals, and grad students who have in many cases designed and built those tools reply with a wry look.
I hasten to add that I’ve got nothing against theories. I wouldn’t mind constructing a few myself. Literary theory, social theory, statistical theory — they’re all fun. But when the word “Theory” is used without adjective or explication, it does in my view deserve a wry look. When you take away all the adjectives, what’s left is essentially a status marker.
So let’s not play that game. Nothing “needs to be theorized” in a vague transitive way; academics who use phrases like that need to realize what they’re saying. DH is an intensely interdisciplinary field that already juggles several different kinds of theory, and actively reflects on the social significance of its endeavors (e.g. in transforming scholarly communication). It is also, among other things, an insurgent challenge to academic hierarchy, organized and led by people who often hold staff positions — which means that the nature of the boundary between practice and theory is precisely one of the questions it seeks to contest.
But as long as everyone understands that “theory” is not a determinate object belonging to a particular team, then I say, the more critique, debate, and intellectual exchange the better. For instance, I quite enjoyed Natalia Cecire’s recent blog post on ways DH could frame its engagement with literary theory more ambitiously. I don’t know whether it’s a good idea to have a “theory THATcamp”; I haven’t been to THATcamp, and don’t know whether its strengths (which seem to lie in collaboration) are compatible with that much yacking. But I think Cecire is absolutely right to insist that DH can and should change the way the humanities are practiced. Because digital approaches make it possible to ask and answer different kinds of questions, there’s going to be a reciprocal interaction between humanistic goals and digital methods, not, as Cecire puts it, a “merely paratactic, additive concatenation.” We’re going to need to theorize about methods and goals at the same time. Together. Intransitively.
[Sun, Oct 23, 2011 — This post is slightly revised from the original version, mostly for clarity.]