A Position on Peer Reviewing in HCI, part 2

This post follows my previous post, in which I outlined my position on peer reviewing and my reasons for it.

In this post, I offer four observations in the form of a guide to serving as a good Associate Chair (AC).

[1]

A CHI paper submission typically represents 12-24 months of a research team’s work. That effort deserves the respect of a substantive and effortful review!

A substantive AC review includes the following:

  • A summary of the reviewers’ key criticisms and praise;
  • A substantive articulation of the AC’s own criticisms and praise;
  • Thoughtful and constructive suggestions to improve the quality and acceptance chances of future versions of the work.

It’s nearly impossible to do all of the above in 2-3 sentences, so AC reviews do need a certain length to be just and effective.

It is an honor to be selected as an AC, a reflection of your community’s esteem for you. Be worthy of that esteem, or step aside for someone who will be.

[2]

The AC is responsible for two very different jobs.

  1. ACs are charged with helping make a decision about whether or not something is accepted.
  2. ACs are also charged with providing constructive and worthwhile feedback to authors both to explain the decision and also to offer suggestions for improving the work (regardless of whether or not it is accepted).

Effective ACs do more of the latter than the former.

[3]

ACs also have a second pair of conflicting charges.

  • They are tasked with representing the reviewers
  • They are tasked with offering their own review

This double-task has some implications:

  • Authors deserve to know that their ACs actually read their work and didn’t merely summarize reviews; otherwise, why have ACs?
  • ACs need to respect their own reviewers’ reviews; otherwise, why have reviewers?
  • If ACs push in a different direction than their own reviewers, they need to
    • Faithfully represent and acknowledge what their own reviewers said, including and especially when they (the ACs) don’t agree with the reviewer(s)
    • Be accountable to their own position by stating very clearly why they disagree with their reviewers
    • Seriously consider asking for new reviewers, a 2AC, and/or discussion

Here is one way this tension creates arbitrary outcomes in the current system. Let’s say that an AC disagrees with her or his own reviewers and wants to give a paper a 2, even though that AC’s reviewers on average gave it a 4. What does the AC record as her or his score? Some ACs will average their own score into the reviewers’ average (with three reviewers at 4 and the AC’s 2, that yields a meta-review score of 3.5). Others will simply record a meta-review score of 2. There is no consistency among ACs (I know this from experience). But since AC meetings use a numeric scale as the primary guide to the preliminary ranking of papers, authors who happen to draw the first kind of AC will have an advantage over those who draw the second kind. That’s arbitrary.
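To make the arbitrariness concrete, here is a minimal sketch in Python, using hypothetical numbers (three reviewers each scoring a 4, and an AC score of 2), of how the two recording conventions diverge for the very same paper:

```python
# Hypothetical scores: three reviewers each give the paper a 4 (average 4.0),
# while the AC believes it merits a 2.
reviewer_scores = [4, 4, 4]
ac_score = 2

# Convention A: the AC folds their own score into the reviewers' average.
meta_score_averaged = (sum(reviewer_scores) + ac_score) / (len(reviewer_scores) + 1)

# Convention B: the AC records only their own judgment.
meta_score_own = ac_score

print(meta_score_averaged)  # 3.5 -- looks borderline in the preliminary ranking
print(meta_score_own)       # 2   -- the same paper looks like a clear reject
```

Same paper, same reviews; where it lands in the preliminary ranking depends only on which convention its AC happens to follow.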

[4]

ACs are responsible for their reviewers’ reviews.

Another way to say this is that ACs must serve as critics of reviewers’ reviews. Reviewers sometimes write poor-quality reviews: ones in which the critical judgment (accept/reject) is not rationally explained or justified, and/or that offer no constructive recommendations for strengthening the research going forward (whether or not it is accepted). It’s an AC’s job to try to catch these early and do something about them. Here are some recommendations for dealing with common types of poor-quality review.

  • Reviewers who rate themselves as “1 No Knowledge” should be seriously reconsidered. A reviewer is a critic, and a critic with no knowledge is not a critic at all. Instead of a defensible judgment, that person can only offer an impressionistic opinion. Imagine if a New York Times book reviewer wrote, “Well, I don’t know anything about contemporary fiction, but your library shouldn’t buy Murakami’s 1Q84, because I didn’t like it.”
  • Vacuous reviews of all types should be challenged or replaced. Examples:
    • The 2-sentence review
    • The typo-correction review
    • The obsess-over-one-tiny-flaw review
    • The “this should go to a different conference so I’m not going to say anything about this paper” review
    • The empty fence-sitter review
  • Reviewers who have conflicts of interest are not acceptable. This is true not just when it’s known in advance, but also when it becomes clear subsequently. Conflicts are not just institutional! If a paper critiques a given scholar’s work, then there is a potential conflict in having that person as a peer reviewer.
  • Scores and reviews that don’t line up (e.g., a positive review rated 2, or a highly critical review rated 4.5) should be explained, or clarification sought from the reviewers.
  • Anomalies deserve explanation (e.g., when reviewers’ scores are widely spread and the AC takes a side, which is fine and indeed part of the job, some explanation is needed so authors can understand why their AC took that side).

When several of these sorts of low-quality judgments occur in the reviews of the same paper (a short AC review, vacuous reviews, anomalous scores), the result is low-quality, even meaningless feedback, and yet it has serious consequences.

———

CHI, DIS, CSCW, UIST, etc. are the flagship conferences of the field; the papers submitted to them represent hundreds of hours of work; and most rejected papers will be resubmitted in the future. Therefore, out of respect for our own profession, we need to hold ourselves as ACs to high standards, especially until we can create better structural accountability.

Continue to part 3, where I make recommendations to the HCI community at large!

1 Comment

  1. Jofish

    It’s nice to see this articulated in one place, and I think this piece lays out well the responsibilities of reviewers.

    Where I’d love to see you go a bit further is to address the structural problems in these conferences and processes that lead to these issues. As you mention, alt.chi’08 (thank you, thank you very much) and ’12 and cscw’12 made some fundamental steps in the right direction: can you characterize the commonalities there, and the opportunities? Can we, for example, continue with anonymous reviewing without a centralized database of all reviewers and ACs with reviews entered by authors, SCs and ACs? Or should we move to an un-anonymized version? Should chairs of smaller venues move first? (I’ve been considering what value and loss there would be from making all or most or some of our correspondence around CHI Panels next year publicly accessible and un-anonymous.)

    So yes, yes, yes, but I want to hear more!
