I enjoy reviewing papers even knowing it sucks up too much of my time. I mean, what better way is there to get out any inner angst than to take it out on the writers of a subpar paper? (That’s a joke, people.) Reading Michael Nielsen’s post taking on the h-index (Michael’s posting more these days!) reminded me of a problem I’ve always wondered about for reviewing.
Suppose that in the far, far future, there are services where you get to keep control of your academic identity (like which papers you authored, etc.) and this identity is integrated with the reviewing systems of scientific journals. (I call this Dream 2.0.) One benefit of such a setup is that it might be possible for you to get some sort of “credit” for your reviewing (altruists need read no further, since they will find any such system useless). But the question is how one should measure “credit” for reviews. Certainly there is the number of reviews performed. But is there a better way to measure the “quality” of a reviewer than a simple count? Or the number of different journals for which the person reviews? Ideally, I would think, such a measure would punish a reviewer for letting through papers which never get cited. Or maybe punish a reviewer for rejecting a paper which gets accepted somewhere else and goes on to be successful. I think Cosma’s observation
…passing peer review is better understood as saying a paper is not obviously wrong, not obviously redundant and not obviously boring, rather than as saying it’s correct, innovative and important.
is probably a nice place to start.
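To make the game concrete, here’s a minimal sketch in Python of what such a reviewer score might look like. Everything in it is made up for illustration: the `Review` fields, the weights, and the citation threshold are hypothetical placeholders, not a serious proposal for the formula.

```python
from dataclasses import dataclass

# All field names, weights, and thresholds below are hypothetical,
# chosen purely to illustrate the shape of such a metric.

@dataclass
class Review:
    recommended_accept: bool    # did the reviewer recommend acceptance?
    eventually_published: bool  # did the paper appear somewhere, here or elsewhere?
    citations: int              # citations the paper later accumulated

def reviewer_score(reviews, cite_threshold=10):
    """Toy reviewer rating: credit for volume, penalties for
    waving through duds and for rejecting papers that thrived elsewhere."""
    score = 0.0
    for r in reviews:
        score += 1.0  # base credit for doing the review at all
        if r.recommended_accept and r.eventually_published and r.citations == 0:
            score -= 0.5  # let through a paper nobody ever cites
        if (not r.recommended_accept) and r.eventually_published \
                and r.citations >= cite_threshold:
            score -= 1.0  # rejected a paper that succeeded somewhere else
    return score

# Example: one good call, one missed hit
history = [Review(True, True, 42), Review(False, True, 120)]
print(reviewer_score(history))  # 2.0 - 1.0 = 1.0
```

Note that the two penalties line up with Cosma’s point: they only catch the obvious failure modes (passing a dud, blocking a paper that clearly had legs), which is about all a citation-based signal can hope to detect.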
Oh, and I certainly won’t claim that it is a problem of any real significance (this is to stop the Nattering Nabobs of Negativity from yelling at me). It’s just a fun metric to try to define. Sort of like the quarterback rating.