24 November 2008

Supplanting vs Augmenting Human Language Capabilities

With an analogy to robotics, I've seen two different approaches. The first is to develop humanoid robots. The second is to develop robots that enhance human performance. The former supplants a human (e.g., the long-awaited robot butler); the latter augments a human. There are parallels in many AI fields.

What about NLP?

I would say that most NLP research aims to supplant humans. Machine translation puts translators out of work. Summarization puts summarizers out of work (though there aren't as many of these). Information extraction puts (one form of) information analysts out of work. Parsing puts, well... hrm...

There seems actually to be quite little in the way of trying to augment human capabilities. Web search might be one such area, though this is only tenuously an "NLP" endeavor. Certainly there is a reasonable amount of work in translation assistance: essentially fancy auto-completion for translators. Some forms of IE might look like this: find all the names, coreferenceify them and then present "interesting" results to a real human analyst who just doesn't have time to look through all the documents... though this looks to me like a fancy version of some IR task that happens to use IE at the bottom.

Where else might NLP technology be used to augment, rather than supplant?

  • A student here recently suggested the following. When learning a new language, you are often reading and encounter unknown words. These words could be looked up in a dictionary and described in a little pop-up window (a rough sketch of this lookup loop appears below). Of course, if the definition (hopefully sense-disambiguated) itself contains unknown words, you'd need to recurse. He then suggested a similar model for reading Wikipedia pages: tell Wikipedia everything you know and then have it regenerate the "variational EM" page, explaining the things you don't know about (again, recursively). This could either be interactive or not. Wikipedia is nice here because you can probably look up most things a reader might not know via internal Wikipedia links.

  • Although really a field of IR, there's the whole interactive track at TREC that essentially aims for an interactive search experience, complete with suggestions, refinements, etc.

  • I can imagine electronic tutorials that automatically follow your progress in some task (e.g., learning to use Photoshop) and auto-generate text explaining the parts where you seem to be stuck, rather than just providing generic, one-size-fits-all advice. (Okay, this starts to sound a bit like our mutual enemy Clippy... but I suspect it could actually be done well, especially if it were really in the context of learning.)

  • Speaking of learning, someone (I don't even remember who anymore! Sorry!) suggested the following to me a while ago. When trying to learn a foreign language, there could be some proxy server you go through that monitors when you are reading pages in the language you want to learn. It could keep track of what you know and offer mouseover suggestions for words you don't know. This is a bit like the first suggestion above.
That's all I can come up with now.
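Just to make the first suggestion concrete, here is a minimal sketch of the lookup loop. Everything in it is hypothetical: `lookup_definition` stands in for whatever dictionary or Wikipedia API you have, `known_words` for a model of the reader, and sense disambiguation is punted on entirely.

```python
PUNCT = ".,;:()!?\"'"

def tokenize(text):
    """Crude whitespace/punctuation tokenizer; a real system would do much better."""
    words = [w.strip(PUNCT).lower() for w in text.split()]
    return [w for w in words if w]

def explain(word, known_words, lookup_definition, depth=0, max_depth=3):
    """Recursively expand a definition until (almost) every word in it is known."""
    if depth >= max_depth:
        return {}                              # give up rather than recurse forever
    definition = lookup_definition(word)       # hypothetical dictionary/Wikipedia call
    glossary = {word: definition}
    for w in tokenize(definition):
        if w not in known_words and w not in glossary:
            glossary.update(explain(w, known_words, lookup_definition,
                                    depth + 1, max_depth))
    return glossary

def annotate_page(text, known_words, lookup_definition):
    """Build a glossary of pop-up definitions for every unknown word on a page."""
    glossary = {}
    for w in tokenize(text):
        if w not in known_words and w not in glossary:
            glossary.update(explain(w, known_words, lookup_definition))
    return glossary
```

If `known_words` is updated as the reader goes, the same loop is more or less the proxy-server idea in the last bullet.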

One big "problem" with working on augmentation tasks is that you cannot avoid actually doing user studies, and we all know how much we love doing those in NLP these days.

04 November 2008

Using machine learning to answer scientific questions

I tend to prefer research (and the papers that result from it) that attempt to answer some sort of question about natural language. And "can I build a system that beats your system" does not count as a question. (Of course, I don't always obey this model myself.) For instance, the original IBM MT papers were essentially trying to answer the question: can we build a model of translation directly from parallel data? Later work in phrase-based and syntax-based translation basically started out by asking the question: is syntactic structure (or local lexical structure) a useful abstraction for enabling translation? It seems the answer to all these questions is "yes."

A less broad type of question that one can ask might be categorized as: is some type of feature relevant for some task? For instance, are lexical semantics features derived via a hand-crafted resource (such as WordNet) useful for the task of coreference resolution? What about lexical semantics features derived automatically from the web? This is the sort of question I looked at in an EMNLP 2005 paper. The answers seemed to be "no" and "yes," respectively.

There are several issues with trying to answer such questions. One issue is that what you're typically looking at is the question: can I figure out a way to turn WordNet into a feature vector that is useful for coreference resolution? This is probably a partial explanation for why our community seems not to like negative results to such questions: maybe you just weren't sufficiently clever in encoding WordNet as features. Or maybe WordNet features are useful, but some other, more useful set of features is swamping their benefit.
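To make the "encoding" problem concrete, here is one naive way (of many) to turn WordNet into mention-pair features, using NLTK's WordNet interface. The feature names and buckets are invented for illustration; this is not the encoding from the EMNLP paper.

```python
# Requires nltk and its WordNet data (nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

def wordnet_features(head1, head2):
    """One possible featurization of the WordNet relationship between two head nouns."""
    feats = {}
    syns1 = wn.synsets(head1, pos=wn.NOUN)
    syns2 = wn.synsets(head2, pos=wn.NOUN)
    if not syns1 or not syns2:
        feats["wn_missing"] = 1.0
        return feats
    # Do the heads share a synset (i.e., could they be synonyms)?
    feats["wn_same_synset"] = float(bool(set(syns1) & set(syns2)))
    # Is one a (transitive) hypernym of the other, in either direction?
    hyper1 = {h for s in syns1 for h in s.closure(lambda x: x.hypernyms())}
    hyper2 = {h for s in syns2 for h in s.closure(lambda x: x.hypernyms())}
    feats["wn_hypernym"] = float(bool(set(syns2) & hyper1 or set(syns1) & hyper2))
    # Bucketed best path similarity over all sense pairs (no disambiguation at all).
    best = max((s1.path_similarity(s2) or 0.0) for s1 in syns1 for s2 in syns2)
    feats["wn_pathsim_gt_0.5"] = float(best > 0.5)
    feats["wn_pathsim_gt_0.2"] = float(best > 0.2)
    return feats
```

Even this toy version makes the point: every line above embodies an encoding decision (which senses, which relations, which buckets), and a negative result may just mean those decisions were bad.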

My first question is whether we're even going about this the right way. The usual approach is to take some baseline system, add WordNet features, and see if predictive performance goes up (i.e., performance on test data). This seems like a roundabout way to attack the problem. After all, the problem of "does some feature have an influence on the target concept" is a classical and very well studied area of statistics: most people have probably seen ANOVA (analysis of variance), but there are many, many more ways to address this question. And, importantly, they don't hinge on notions of predictive performance (which almost immediately ties us back to "my system is better than yours").

Now, I'm not claiming that ANOVA (or some variant thereof) is the right thing to do. Our problems are often quite different from those encountered in classical statistics. We have millions of covariates (features) and often complex structured outputs. I just wonder if there's some hint of a good idea in those decades of statistical research.
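To give one concrete example of the shape such a test might take: the closest off-the-shelf analogue I can think of is a likelihood-ratio test between nested models, fit with and without the feature group in question; it asks about the group's contribution to the fit rather than about test-set accuracy. The sketch below (in Python, with statsmodels; all variable names are placeholders) is only illustrative, and the asymptotic chi-squared assumptions behind it are exactly what millions of correlated features and structured outputs violate.

```python
# Hypothetical setup: X_base holds the baseline features, X_group the features
# whose relevance we want to test (say, WordNet features), and y the gold labels
# for some binary decision (e.g., coreferent vs. not).
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def feature_group_lr_test(X_base, X_group, y):
    """Likelihood-ratio test for adding X_group to a logistic regression on X_base."""
    X0 = sm.add_constant(X_base)                         # reduced model: base features only
    X1 = sm.add_constant(np.hstack([X_base, X_group]))   # full model: base + group
    ll0 = sm.Logit(y, X0).fit(disp=0).llf
    ll1 = sm.Logit(y, X1).fit(disp=0).llf
    stat = 2.0 * (ll1 - ll0)                             # LR statistic
    df = X_group.shape[1]                                # extra parameters in the full model
    return stat, chi2.sf(stat, df)                       # p-value under chi-squared(df)
```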

The other way to go about looking at this question, which at the time I wrote the EMNLP paper I thought was pretty good, is the following. The question is: why do I care whether WordNet features are useful for prediction? Very rarely will they actually hurt performance, so who cares if they don't help? (Besides those people who actually want to know something about how language works... though in this case it perhaps tells us more about how WordNet works.) The perspective we took was that someone out there is going to implement a coreference system from scratch (perhaps for a new language, perhaps for a new domain) and they want to know which features they should spend time writing and which ones they can ignore. This now is a question about predictive performance, and we did a form of backwards feature selection to try to answer it. If you look at Figure 2 in the paper, basically what you see is that you'd better implement string-match features, lexical features, count-based features, and knowledge-based features. On top of this, you can get another half point with inference features and another half again using gazetteers. But beyond that, you're probably wasting your time. (At the time of the EMNLP paper, one of the big "sells" was the count-based features, which are impossible in the standard pairwise-decision models that most people apply to this problem. I still think this is an interesting result.)
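The backwards selection itself is nothing fancy. A generic sketch is below; `groups` maps a feature-group name (string-match, lexical, and so on) to whatever identifies its features, and `evaluate` is a placeholder that trains on the selected groups and returns a development-set score. None of this is code from the paper.

```python
def backward_group_selection(groups, evaluate, tolerance=0.001):
    """Greedily drop the feature group whose removal hurts the dev score least."""
    active = dict(groups)
    dropped = []                               # (group, cost of dropping it), in order
    score = evaluate(active)
    while len(active) > 1:
        # Try removing each remaining group; keep the least damaging removal.
        candidates = [(evaluate({k: v for k, v in active.items() if k != name}), name)
                      for name in active]
        best_score, least_useful = max(candidates)
        if best_score < score - tolerance:
            break                              # every remaining group earns its keep
        dropped.append((least_useful, score - best_score))
        del active[least_useful]
        score = best_score
    return active, dropped
```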

At any rate, I still think this ("what should I spend time implementing?") is a reasonable way to look at the problem. But if you're moving to a new domain or new language, most likely all of your time/money is going to be spent annotating data, not implementing features. So maybe you should just go and implement everything you can think of anyway. And besides, if you want to make an argument about moving to different languages or different domains or even different amounts of training data, you should make those arguments specifically and do the experiments to back them up. For instance, how does Figure 2 in the EMNLP paper change when you have half the data, or a tenth of the data? Maybe the lexical features wouldn't matter as much, and perhaps WordNet would kick in?
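That half-the-data question is just the same ablation wrapped in one more loop. The sketch below reuses backward_group_selection from above; `make_evaluate(frac)` is again a placeholder that trains on a random subsample of that fraction of the annotated data.

```python
def ablation_learning_curve(groups, make_evaluate, fractions=(0.1, 0.5, 1.0)):
    """Which feature groups survive backward selection at each training-set size?"""
    surviving = {}
    for frac in fractions:
        evaluate = make_evaluate(frac)         # hypothetical: trains on a `frac` subsample
        kept, _ = backward_group_selection(groups, evaluate)
        surviving[frac] = sorted(kept)         # if the ranking shifts, the story shifts
    return surviving
```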