
On the imperfection of the Google dataset, and imperfection in general

The dataset that Google made public last week isn’t perfect. As Natalie Binder, among others, has pointed out, it contains many OCR (optical character recognition) errors, and at least a few errors in dating. (UPDATE 12/22: It is worth noting, however, that the dataset will have many fewer errors than Google Books itself, because it is based on a subset of volumes with relatively clean OCR.)

Moreover, as Dennis Baron argues in The Web of Language, “books don’t always reflect the spoken language accurately.” Informal words like “hello” are likely to be underrepresented in books.

Even more significantly, the utility of the dataset is reduced by Google’s decision to strip out all information about the context of original occurrence, as Mark Liberman has noted. If researchers had unfettered access to the full text of the original works, we could draw much more interesting conclusions about context, genre, and authorship.

Finally, I would add that — even with the present structure of the dataset — it’s possible to imagine search strategies other than simply graphing the frequencies of individual words and phrases, one by one. The ngram viewer is an elegant interface, but a limited one.
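For instance, someone working directly with the downloadable count files could graph the combined frequency of a whole group of related words at once, rather than one word at a time. Here is a minimal sketch in Python of what that might look like. The file name is a placeholder, and the tab-separated column layout (ngram, year, match count, …) is an assumption about the downloadable 1-gram files, so check both against the release you are actually using.

# Minimal sketch: sum yearly counts for a whole group of words, instead of
# graphing each word separately. Assumes tab-separated rows of the form
# ngram, year, match_count, ... -- check the layout of your release.
from collections import defaultdict

def group_counts(ngram_path, words):
    """Sum match counts per year for every 1-gram in `words`."""
    words = {w.lower() for w in words}
    counts = defaultdict(int)
    with open(ngram_path, encoding="utf-8") as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            if fields[0].lower() in words:
                counts[int(fields[1])] += int(fields[2])
    return counts

# A hypothetical "semantic field" query: one curve for a cluster of terms.
# gothic = group_counts("1gram-sample.tsv",
#                       {"vampire", "ghost", "spectre", "phantom"})

The same pattern extends naturally to ratios between groups of words, or to restricting the sum to a particular range of years.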

All of these criticisms are true. But the Google dataset is also turning out to be tremendously useful, and it’s likely to become even more useful as researchers refine it and develop more flexible ways to query it.

Of course, it has to be used appropriately. This is not a tool you should use if you want to know exactly how often Laurence Sterne referred to noses. It’s a tool for statistical questions about the written language that involve very large numbers of examples. When it’s applied to questions on that scale, the OCR errors in the English corpus (after 1820) are not significant enough to prevent the ngram viewer from producing useful results. Before 1820 there are more significant OCR problems, especially with the substitution of f for “long s.” But even there, I don’t see the problem as insuperable; there are straightforward ways for researchers to compensate for the most predictable OCR errors.
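One straightforward (if crude) strategy is sketched below: query the predictable misreadings alongside the modern spelling and sum their frequencies. The variant-generating rule here is my own rough heuristic, not anyone’s published method; a careful study would also need to weed out variants that collide with real words (“fame” for “same,” for instance).

def long_s_variants(word):
    """Plausible 'long s' misreadings of a word.

    The long s appeared in initial and medial position but never at the end
    of a word, so the most predictable OCR error turns every non-final "s"
    into "f". Single-letter slips are included as well. Crude by design:
    variants that happen to be real words would need special handling.
    """
    variants = {word}
    if len(word) > 1:
        variants.add(word[:-1].replace("s", "f") + word[-1])   # all non-final s
        for i, ch in enumerate(word[:-1]):                     # single slips
            if ch == "s":
                variants.add(word[:i] + "f" + word[i + 1:])
    return variants

# Example, reusing the group_counts() sketch above (my helper, not part of
# the dataset's tooling):
# husband = group_counts("1gram-sample.tsv", long_s_variants("husband"))
# long_s_variants("husband") -> {"husband", "hufband"}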

The larger critique being leveled at the ngram viewer, by Natalie Binder and many other humanists, is that it’s impossible to know what an individual graph measures. Complex words have multiple meanings, Binder reminds us, so how should we interpret a graph showing a decline in the frequency of “nature”? How should we interpret a correlation between the increasing frequency of “vampire” and the declining frequency of “dilettante”?

The saying that correlation doesn’t prove causation definitely needs to be underlined in this domain. There are so many words in the language that a huge number of them will always correlate in largely accidental ways. More generally, it’s true that, in most cases, a graph of word frequency will not by itself tell us very much. You have to have some cultural context before the increasing frequency of “vampire” in the late twentieth century is going to mean anything at all to you. But of course, this is true of all historical evidence: no single poem or novel, in isolation, can tell us what was happening culturally around 1800. You need to compare different texts and authors from different social groups; it may be helpful to know that there was a revolution in France, and so on.
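To see how cheap accidental correlation is, here is a small simulation of my own (it uses no Google data at all): generate a couple of hundred independent random-walk “frequency curves” and count how many pairs nevertheless correlate strongly.

# Independent random walks standing in for word-frequency curves. With
# hundreds of series and tens of thousands of pairs, strong correlations
# turn up by sheer accident.
import random

def random_walk(length, seed):
    rng = random.Random(seed)
    series, value = [], 0.0
    for _ in range(length):
        value += rng.gauss(0, 1)
        series.append(value)
    return series

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

walks = [random_walk(200, seed) for seed in range(200)]
pairs = [(i, j) for i in range(len(walks)) for j in range(i + 1, len(walks))]
strong = sum(1 for i, j in pairs if abs(pearson(walks[i], walks[j])) > 0.8)
print(f"{strong} of {len(pairs)} independent pairs correlate above |r| = 0.8")

None of those correlations mean anything; they are artifacts of comparing a great many drifting series with one another.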

What puzzles me about humanistic disdain for the ngram viewer is that it often seems to presume that a piece of evidence must be legible in itself — naked and free of all context — in order to have any significance at all. If a graph doesn’t have a single determinate meaning, read from its face as easily as the value of a coin, then what is it good for? This critique seems to take hyper-positivism as a premise in order to refute a rather mild and contextual empiricism.

In short, the evidence produced by Google’s new tool is imperfect. It will have to be interpreted sensitively, by people who understand how it was produced. And it will need to be supplemented by multiple kinds of context (literary, social, political), before it acquires much historical significance. But these things are also true of all the other forms of evidence humanists invoke.

It seems likely that humanists are reluctant to take this kind of evidence seriously not because they find it too loose and indeterminate, but because they fear that the superficial certainty of quantitative evidence will seduce people away from more difficult kinds of interpretation. This concern can easily be exaggerated. If an awareness of social history doesn’t prevent us from reading sensitively (and I don’t think it does), then the much weaker evidence provided by text-mining isn’t likely to do so either. I’m reminded of an observation Matt Yglesias made in a different (political) context: that people are in general liable to take “an unduly zero-sum view of human interactions.” Different kinds of evidence needn’t be construed as competitive; they might conceivably enrich each other.

By tedunderwood

Ted Underwood is Professor of Information Sciences and English at the University of Illinois, Urbana-Champaign. On Twitter he is @Ted_Underwood.
