28 January 2007

Good News on ACL Reviews

I'm reviewing for ACL again this year (in the machine learning subcomponent). A couple of days ago, I received my notice to start bidding on papers (more on bidding below). The email came with the following note:

Naturally, reviewers have been chosen to assess papers based on their own expertise and outlook. Having said this, we are aware that ACL has sometimes been perceived, especially in recent years, as overemphasizing the pursuit of small incremental improvements of existing methods, perhaps at the expense of exciting new developments. (ACL is solid but boring, is what some people would say.) While we believe that it would be counterproductive to change course radically -- we certainly would not want to sacrifice solidity! -- we would like to encourage you, as a reviewer, to look out particularly for what's novel and interesting, even if this means accepting a paper that has one or two flaws, for example because it has not been evaluated as rigorously as you would like. (It is for you to judge when a flaw becomes a genuine problem.)
I think this is fantastic! (Would someone who is reviewing---i.e., on the PC---for another area confirm or deny that all areas got such a message, or was it just ML?) One difficulty I always have as a reviewer is that I assign scores to different categories (originality, interest, citations, etc.) and then am asked to come up with a meta-score that summarizes all these scores. But I'm not given any instruction on how to weigh the different components. What this note seems to be doing is saying "weigh interest higher than you usually would." In the past two years or so, I've been trying to do this. I think that when you start out reviewing, it's tempting to pick apart little details in a paper rather than focusing on the big picture. It's been a conscious (and sometimes difficult) process for me to get over this. This explicit note is nice to see because it is essentially saying that my own internal process is a good one (or, at least, whoever wrote it thinks it's a good one).
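To make the meta-score issue concrete, here is a minimal sketch of combining per-category scores with explicit weights. The category names and weight values are hypothetical (neither the email nor the review form specifies a formula); the only point is that raising the weight on interest changes how the very same paper comes out overall.

```python
def meta_score(scores, weights):
    """Weighted average of per-category review scores (1-10 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[c] * weights[c] for c in weights) / total_weight

# Hypothetical category scores for one paper.
paper = {"originality": 8, "interest": 9, "citations": 5, "rigor": 6}

# Equal weights vs. "weigh interest higher than you usually would".
default_weights = {"originality": 1, "interest": 1, "citations": 1, "rigor": 1}
interest_heavy  = {"originality": 1, "interest": 3, "citations": 1, "rigor": 1}

print(meta_score(paper, default_weights))  # 7.0
print(meta_score(paper, interest_heavy))   # ~7.67: same paper, higher meta-score
```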

I also think---in comparison to other conferences I've PCed for or reviewed for---that ACL does a really good job of moderating the bidding process. (For those unfamiliar with bidding... when a paper gets submitted, some area chair picks it up. All papers under an area chair are shown---title plus abstract---to the reviewers in that area. Reviewers can bid "I want to review this," "I don't want to review this," "I am qualified to review this," or "Conflict of interest." There is then some optimization step that assigns papers to reviewers so as to satisfy reviewers' preferences and constraints.) In comparison to ECML and NIPS in the past, the ACL strategy of splitting reviewing into areas, each with its own chair, seems to be a good thing. For ECML, I got a list of about 500 papers to select from, and I had to rank each of them 1-5 (or 1-10, I don't remember). This was a huge hassle.
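As a rough illustration of the assignment step that follows bidding (the post doesn't say what optimization ACL actually uses), here is a greedy sketch that honors conflicts, prefers "want" bids over "qualified" ones, and caps each reviewer's load. The bid labels, the load cap, and the one-reviewer-per-paper simplification are all made up for the example; real systems assign several reviewers per paper and optimize globally.

```python
# Lower preference value = more desirable assignment.
BID_PREFERENCE = {"want": 0, "qualified": 1, "no": 2}

def assign(papers, reviewers, bids, max_load=3):
    """Greedily pick one reviewer per paper from the bids.

    bids[(reviewer, paper)] is 'want', 'qualified', 'no', or 'conflict';
    missing entries default to 'qualified'.
    """
    load = {r: 0 for r in reviewers}
    assignment = {}
    for paper in papers:
        candidates = []
        for r in reviewers:
            bid = bids.get((r, paper), "qualified")
            if bid == "conflict" or load[r] >= max_load:
                continue  # never assign conflicted or overloaded reviewers
            candidates.append((BID_PREFERENCE[bid], load[r], r))
        if candidates:
            _, _, best = min(candidates)  # best bid first, then lightest load
            assignment[paper] = best
            load[best] += 1
    return assignment

papers = ["P1", "P2"]
reviewers = ["alice", "bob"]
bids = {
    ("alice", "P1"): "want", ("bob", "P1"): "conflict",
    ("alice", "P2"): "no",   ("bob", "P2"): "want",
}
print(assign(papers, reviewers, bids))  # {'P1': 'alice', 'P2': 'bob'}
```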

Of the conferences I'm familiar with, ACL seems to have a pretty decent policy. While I would be thrilled to see them introduce an "author feedback" step, everything else seems to work pretty well. In the past, I've only once gotten into a real argument over a paper with other reviewers --- most of the time, all the reviewer scores have tended to be within 1 or 2 points (out of ten) of each other. And for the times when there is an initial disagreement, it is usually resolved quickly (e.g., one reviewer points out some major accomplishment, or major flaw, in the paper that another reviewer missed).

1 comment:

Anonymous said...

> Would someone who is reviewing---i.e., on the PC---for
> another area confirm or deny that all
> areas got such a message, or was it just ML?

Hal,

This was sent to all reviewers in ACL'07.

Best
Jochen
