Peer review motivations and measurement

Yesterday’s blogpost by David Crotty on Scholarly Kitchen outlines the problems with the notion of giving credit for peer review. It is very thought-provoking, although I’m personally still keen to see peer review done in the open and to explore the notion of credit for peer review some more. For me the real question is not whether to measure it, but how best to measure it and what value to set on that measure.

Both the blogpost and its comments discuss researchers’ current motivation for carrying out peer review:

  • To serve the community & advance the field (altruism?)
  • To learn what’s new in the field (& learn before it is published, i.e. before others!)
  • To impress editors/publishers (& thereby increase own chances of publication)
  • To contribute to a system in which their own papers will also benefit (self-interest?)

Crotty writes that if we change researchers’ motivation so that they chase credit points, their behaviour will change and new problems will arise in peer review. He poses some very interesting questions, including:

How much career credit should a researcher really expect to get for performing peer review?

I think that’s a great question! However, I do think that we should investigate potential ways to give credit for peer review. I’ve previously blogged about the problems with peer review and followed up on those thoughts, and I’ve no doubt that I’ll continue to give this space more thought: peer review is about quality, and as a librarian at heart, I’m keen that we have good-quality information available as widely as possible.

In David Crotty’s post I am particularly concerned by the notion that researchers, because they are intrinsically motivated, will be prepared to take on ever higher workloads. I don’t want that for researchers: they are already under enormous amounts of pressure. Not all academics can work all waking hours. Some actually do (at least some of the time), I know, but presumably someone else cleans and cooks for them (wives? paid staff?), and even if all researchers had someone to do that for them, it’s not fair to researchers, nor good for academia, for the community to be made up of such isolated individuals.

One commenter makes the point that not all peer reviews are alike: some might take a day, some twenty minutes, so if credit is given simply according to how many reviews someone has carried out, it won’t be quite fair. And yet, as Crotty argued in his blogpost, complicating the measurement is overkill, because no-one really cares to know more than a simple count. Perhaps that’s part of what needs fixing with peer review: a little more uniformity of practice. Is it fair to younger journals (which probably attract papers from early-career researchers who don’t trust themselves to submit to the journal giants) that they get comparatively cursory time from peer reviewers?

Another comment mentions that the current system favours free-riding: not everyone carries out peer review, even though everyone benefits from the system. The counterpoint, in another comment, is that there is already a de facto system of credit: journal editors are aware of who is carrying out peer review, and those editors wield real power, reviewing papers and sitting on funding panels. I’m not sure that I’d want to rely on a busy editor’s memory to get the credit I deserved, but the idea reminded me of how the peer review system has worked up until now, and the issue seems to be that the expanding, increasingly international research and publishing community is no longer as close-knit as it once was.

There is a broader issue here. Crotty suggested that university administrators would not want researchers to take the time to do peer review, but rather to do original research all the time, since that’s what brings in the money and the glory. But in order to be a good researcher (and pull in the grant funding), one has to read others’ papers and be aware of the direction of research in the field. Plus, review papers are often more highly cited than original research papers, so surely those administrators will want researchers who produce review papers and pull in the citations? University rankings often use bibliometric data, and administrators do care about those!

What we’re really talking about is how to measure researchers’ performance, and perhaps peer review (if openly measured) is a part of that, but perhaps not. I like the notion of some academics becoming expert peer reviewers, whilst others are expert department/lab leaders, grant writers, authors or even teachers. We all have different strengths, and perhaps it’s not realistic to expect all of our researchers to do everything; but if you want a mixture in your team then you need to know who is doing what.

I’d like to finish with Kent Anderson’s thoughtful comment about retaining excellent reviewers:

Offering credit and incentives aimed at retaining strong reviewers is different from creating an incentives system to make everyone a reviewer (or to make everyone want to be a reviewer).

Let’s think on it some more…
