
Digital humanities and the spy business.

I’m surprised more digital humanists haven’t blogged the news that the US Intelligence Advanced Research Projects Activity (IARPA) wants to fund techniques for mining and categorizing metaphors.

The stories I’ve read so far have largely missed the point of the program. They focus instead on the amusing notion that the government “fancies a huge metaphor repository.” And it’s true that the program description reads a bit like a section of English 101 taught by the men from Dragnet. “The Metaphor Program will exploit the fact that metaphors are pervasive in everyday talk and reveal the underlying beliefs and worldviews of members of a culture.” What is “culture,” you ask? Simply refer to section 1.A.3., “Program Definitions”: “Culture is a set of values, attitudes, knowledge and patterned behaviors shared by a group.”

This seems accurate enough, although the combination of precision and generality does feel a little freaky. “Affect is important because it influences behavior; metaphors have been associated with affect.”

The program announcement is similarly precise about the difference between metaphor and metonymy. (They’re not wild about metonymy.)

(3) Figurative Language: The only types of figurative language that are included in the program are metaphors and metonymy.
• Metonymy may be proposed in addition to but not instead of metaphor analysis. Those interested in metonymy must explain why metonymy is required, what metonymy adds to the analysis and how it complements the proposed work on metaphors.

All this is fun, but the program also has a purpose that hasn’t been highlighted by most of the reporting I’ve seen. The second phase of the program will use statistical analysis of metaphors to “characterize differing cultural perspectives associated with case studies of the types of interest to the Intelligence Community.” One can only speculate about those types, but I imagine that we’re talking about specific political movements and religious groups. The goal is ostensibly to understand their “cultural perspectives,” but it seems quite possible that an unspoken, longer-term goal might involve profiling and automatically identifying members of demographic, vocational, or political groups. (IARPA has inherited some personnel and structures once associated with John Poindexter’s Total Information Awareness program.) The initial phase of the metaphor-mining is going to focus on four languages: “American English, Iranian Farsi, Russian Russian and Mexican Spanish.”

Naturally, my feelings are complex. Automatically extracting metaphors from text would be a neat trick, especially if you also distinguished metaphor from metonymy. (You would have to know, for instance, that “Oval Office” is not a metaphor for the executive branch of the US government.) [UPDATE: Arno Bosse points out that Brad Pasanek has in fact been working on techniques for automatic metaphor extraction, and has developed a very extensive archive. Needless to say, I don’t mean to associate Brad with the IARPA project.]

Going from a list of metaphors to useful observations about a “cultural perspective” would be an even neater trick, and I doubt that it can be automated. My doubts on that score are the main source of my suspicion that the actual deliverable of the grant will turn out to be profiling. That may not be the intended goal. But I suspect it will be the deliverable, because it’s the part of the project researchers are most likely to get working reliably. It probably is possible to identify members of specific groups through statistical analysis of the metaphors they use.

On the other hand, I don’t find this especially terrifying, because it has a Rube Goldberg indirection to it. If IARPA wants to automatically profile people based on digital analysis of their prose, they can do that in simpler ways. The success of stylometry indicates that you don’t need to understand the textual features that distinguish individuals (or groups) in order to make fairly reliable predictions about authorship. It may well turn out that people in a particular political movement overuse certain prepositions, for reasons that remain opaque, although the features are reliably predictive. I am confident, of course, that intelligence agencies would never apply a technique like this domestically.
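For readers who want to see what I mean by stylometry, here is a minimal sketch (assuming scikit-learn, with hypothetical texts, group labels, and word list) of a classifier that predicts group membership from function-word frequencies alone, without anyone interpreting the features:

```python
# A minimal stylometry sketch, not IARPA's method: predict group
# membership from function-word frequencies alone. The texts, labels,
# and word list below are hypothetical placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Function words carry little overt meaning, which is the point:
# the classifier can succeed without "understanding" its features.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "it",
                  "with", "as", "for", "was", "on", "but", "by", "not"]

docs = [
    "the argument that it was for the best did not move them",
    "to a man they stood with us on the question of reform",
    "it was not the custom of the house to debate by night",
    "as for the rest they went on with it in a fury",
]
labels = ["A", "B", "A", "B"]  # hypothetical group memberships

model = make_pipeline(
    CountVectorizer(vocabulary=FUNCTION_WORDS),
    LogisticRegression(),
)
model.fit(docs, labels)

# Predict the group of an unattributed text from its function words.
print(model.predict(["but it was not to be, and that was that"]))
```

Whether the coefficients such a model learns would ever tell you anything about a group’s “cultural perspective” is exactly the question I’m skeptical about.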

Postscript: I should credit Anna Kornbluh for bringing this program to my attention.


Why humanists need to understand text mining.

Humanists are already doing text mining; we’re just doing it in a theoretically naive way. Every time we search a database, we use complex statistical tools to sort important documents from unimportant ones. We don’t spend a lot of time talking about this part of our methodology, because search engines hide the underlying math, making the sorting process seem transparent.
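To give a sense of what that hidden math looks like, here is a toy sketch of one common relevance heuristic, TF-IDF weighting combined with cosine similarity; the documents and query are invented, and real search engines layer many further assumptions on top:

```python
# A toy version of the statistical machinery hidden behind the search
# box: TF-IDF weighting plus cosine similarity, one common relevance
# heuristic among many. Documents and query are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "treaty negotiations between spain and england",
    "spanish colonial trade in the seventeenth century",
    "notes on english pastoral poetry",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)

# Embed the query in the same weighted vector space as the corpus.
query_vector = vectorizer.transform(["spain in the seventeenth century"])

# Every assumption about "relevance" is baked into this one score.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.3f}  {doc}")
```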

But search is not a transparent technology: search engines make a wide range of different assumptions about similarity, relevance, and importance. If (as I’ve argued elsewhere) search engines’ claim to identify obscure but relevant sources has powerfully shaped contemporary historicism, then our critical practice has come to depend on algorithms that other people write for us, and that we don’t even realize we’re using. Humanists quite properly feel that humanistic research ought to be shaped by our own critical theories, not by the whims of Google. But that can only happen if we understand text mining well enough to build — or at least select — tools more appropriate for our discipline.

[Image: The AltaVista search page, circa 1996. This was the moment to freak out about text mining.]

This isn’t an abstract problem; existing search technology sits uneasily with our critical theory in several concrete ways. For instance, humanists sometimes criticize text mining by noting that words and concepts don’t line up with each other in a one-to-one fashion. This is quite true: but it’s a critique of humanists’ existing search practices, not of embryonic efforts to improve them. Ordinary forms of keyword search are driven by individual words in a literal-minded way; the point of more sophisticated strategies — like topic modeling — is precisely that they pay attention to looser patterns of association in order to reflect the polysemous character of discourse, where concepts always have multiple names and words often mean several different things.
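A compressed example may make the contrast concrete. The sketch below runs scikit-learn’s latent Dirichlet allocation over a tiny invented corpus; because each inferred “topic” is a probability distribution over the whole vocabulary, a concept can surface under several names, and a word can contribute to several topics:

```python
# A compressed topic-modeling sketch using scikit-learn's latent
# Dirichlet allocation; the corpus and parameters are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "the king dissolved parliament and raised an army",
    "parliament voted new taxes to fund the army",
    "the poet praised the beauty of the pastoral landscape",
    "shepherds and flocks recur in pastoral verse",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(corpus)

# Each topic is a probability distribution over the whole vocabulary,
# so one concept can surface under several different names at once.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [words[j] for j in topic.argsort()[::-1][:4]]
    print(f"topic {i}: {', '.join(top)}")
```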

Perhaps more importantly, humanists have resigned themselves to a hermeneutically naive approach when they accept the dart-throwing game called “choosing search terms.” One of the basic premises of historicism is that other social forms are governed by categories that may not line up with our own; to understand another place or time, a scholar needs to begin by eliciting its own categories. Every time we use a search engine to do historical work we give the lie to this premise by assuming that we already know how experience is organized and labeled in, say, seventeenth-century Spain. That can be a time-consuming assumption, if our first few guesses turn out to be wrong and we have to keep throwing darts. But worse, it can be a misleading assumption, if we accept the first or second set of results and ignore concepts whose names we failed to guess. The point of more sophisticated text-mining techniques — like semantic clustering — is to allow patterns to emerge from historical collections in ways that are (if not absolutely spontaneous) at least a bit less slavishly and minutely dependent on the projection of contemporary assumptions.
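For illustration, here is one minimal form semantic clustering can take (the corpus and the number of clusters are hypothetical choices, and k-means is only one of many possible algorithms): represent documents as weighted word vectors and let the groupings emerge from the collection itself:

```python
# A minimal semantic-clustering sketch: let groupings emerge from the
# collection instead of from guessed search terms. The corpus and the
# number of clusters are hypothetical choices.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

corpus = [
    "sermons on grace and salvation",
    "a treatise concerning divine grace",
    "accounts of the wool trade with flanders",
    "customs duties levied on imported wool",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(corpus)

# k-means groups documents by overall vocabulary similarity; the
# clusters are categories nobody had to name in advance.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)
for label, doc in zip(km.labels_, corpus):
    print(label, doc)
```

The point is not that this particular algorithm suits any given archive, only that the categories it returns were not projected in advance by a scholar choosing search terms.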

I don’t want to suggest that we can dispense with search engines; when you already know what you’re looking for, and what it’s called, a naive search strategy may be the shortest path between A and B. But in the humanities you often don’t know precisely what you’re looking for yet, or what it’s called. And in those circumstances, our present search strategies are potentially misleading — although they remain powerful enough to be seductive. In short, I would suggest that humanists are choosing the wrong moment to get nervous about the distorting influence of digital methods. Crude statistical algorithms already shaped our critical practice in the 1990s when we started relying on keyword search; if we want to take back the reins, each humanist is going to need to understand text mining well enough to choose the tools appropriate for his or her own theoretical premises.