Further thoughts on Peer Review & speeding up traditional journal publication

Back in January, I wrote about Peer Review. It’s a big topic! Here are some more reflections, following on from my last blog post about it.

Speeding things up in journal article publication (on “Peer review takes a very long time”)

[Image: a pocket watch]

I wrote that peer review “takes a very long time” because many scholars want to get their work out there to be read as soon as possible. Of course, this is a loose concept and “a very long time” is relative. Some might think that I am criticising publishers for being slow, but I’m not pointing the finger of blame! I know that publishers have been addressing the issue and peer review has sped up in recent times, especially since there is now software that can help track it: SPARC has a handy round-up of manuscript submission software. However, the peer reviewers themselves must respond, and they are under a lot of pressure. The system can only be as fast as the slowest reviewer, and there are all sorts of (entirely understandable) circumstances that might slow an individual down.

I should take a look at some of the developments that have helped to speed up traditional scholarly communication, though:

Scholarly publishers have invested in initiatives like Sage’s OnlineFirst to help peer-reviewed research articles reach audiences before journal issues are complete, thus cutting publication waiting periods.

Some publishers have also introduced mega journals with cascading peer review systems, which are often based on Gold Open Access. ImpactStory’s blog has a great post about how authors can make the most of these types of journal. These speed up an article’s time to publication because, after a peer review that leads to rejection from one title, your paper can be fast-tracked to publication in the next “tier” of journal at the same publisher, without the need to submit again and start the process from the very beginning.

And of course, as a librarian I should mention the sophisticated alerting services that help researchers find out about each other’s papers as soon as possible: researchers are no longer dependent on the print copy landing on their desk, and on finding the time to browse through the table of contents!
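(As a purely illustrative aside: many of these alerting services essentially boil down to watching a feed. Assuming a journal exposes an RSS or Atom feed, and using the Python feedparser library, a minimal home-made alert might look something like the sketch below; the feed URL is a placeholder, not a real address.)

```python
# A minimal sketch of a table-of-contents alert, assuming the journal
# exposes an RSS/Atom feed. The URL below is a placeholder, not a real feed.
import feedparser

FEED_URL = "https://journals.example.com/great-journal/latest/rss"

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    # Standard feed entries carry at least a title and a link.
    print(entry.title, "->", entry.link)
```

Real services do this at scale, across thousands of journals, and layer keyword and citation alerts on top.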

Putting it online yourself is quicker: why not try that?

Some research repositories might take non-peer-reviewed content, and in theory, authors could always put a copy of their work on a personal web page before peer review if they’re confident in it and just want it out there. There are disciplinary differences in authors’ reactions to this idea. This article in PLOS Biology makes the case for the biology community following in the footsteps of physics, in using pre-print servers to share such early versions. Its authors point out that there are benefits to doing this, including:

Posting manuscripts as preprints also has the potential to improve the quality of science by allowing prepublication feedback from a large pool of reviewers.

Many authors would not share their early manuscripts in this way, because they value peer review as a process of polishing their work. I think this is a reason for peer review to take place in the open, because then it becomes apparent just how important a contribution a peer reviewer might have made to a paper. As I said in my previous post, peer reviewers should get credit for their work, but perhaps I should have made it clear that I’m not talking about it looking good on their CV, or their peer review activity going down well with their Head of Department!


Even authors who are happy to share un-polished pre-peer-review versions of their work (aka pre-prints, aka manuscripts) might be wary if it is not the norm in their discipline, because it might prejudice their chances of publication in the big-name journals of their field. Authors will likely have to agree to clauses stating that the work has not previously been published elsewhere. When I worked at the University of Warwick, in the early days of their institutional repository, we surveyed a number of big publishers to ask whether they would consider repository deposit to constitute prior publication, and thus a breach of this kind of clause in their authors’ agreement. Some said yes, some said no.

This is not a clear area for authors, and for many it’s not worth the time of enquiring or the risk of finding out the hard way, i.e. through rejection of their article because plagiarism-detection software identifies it as previously published online. Researchers need the quality “badge” that a journal gives them, for their CV and their institution’s performance review processes: publishing articles is not only about communication with other researchers; it is also about kudos.


For some authors, therefore (I would guess most), the earliest version they might share would be a post-peer-review version (sometimes called a post-print, sometimes an author’s final version), which, if there are no embargo periods from the publisher, would become available at the same time as their article becomes available through an OnlineFirst scheme.


Post peer review: commentary and altmetrics

I mentioned post-publication peer review in my previous post: I thought of it then as an alternative to peer review, but perhaps I should think of it more as something complementary to peer review. Perhaps peer review doesn’t need to be either traditional or post-publication: perhaps it is really a process that doesn’t end with publication.


There are many ways that researchers share and comment on each other’s work after it has been published, and therefore after the peer review process for traditional articles. We can track these interactions on sites like ResearchGate and Mendeley, and through altmetrics software that collates data on such interactions… but altmetrics and its role is a subject I’ve looked at separately already, and one I’m likely to return to later!
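(For the curious, here is a rough sketch of what “collating data on such interactions” can look like in practice, using the public Altmetric API for a single article. The DOI is a placeholder, and the JSON field names reflect my own assumptions about the response, so treat this as illustrative rather than definitive.)

```python
# An illustrative sketch: fetch attention data for one article from the
# public Altmetric API. The DOI is a placeholder; real use should respect
# the API's rate limits and terms of service.
import requests

doi = "10.1234/example-doi"  # placeholder, not a real DOI
resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")

if resp.ok:
    data = resp.json()
    print(data.get("title"))
    print("Altmetric score:", data.get("score"))
else:
    print("No attention data found for this DOI.")
```

Aggregators run queries like this continuously, across every DOI they track, and that is what feeds the altmetrics dashboards you see on publisher and repository pages.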

So many online content platforms: where should you put research outputs?

You can deposit your work in LOADS of places*, but how do you choose where to bother depositing? Here’s what I’d look for when considering depositing my work somewhere online:

1- “linkedness”, i.e. they link to or feed into other tools you like, or initiatives you are expected to take part in. E.g. you can get altmetric.com data for your article from your institutional repository, ImpactStory data from Figshare and PLOS data from your publisher; or you can get an automatic tweet out of it; or your information will be used for your university’s website & performance reviews…

2- “long term”, ie there will be investment to retain the service for a while, preferably retaining the features that you value. Your work would thus be preserved, and your effort of depositing would have lasting benefits.

3- presentation: they should make you/your research look good!

4- discoverability: they should make your work discoverable, preferably in ways beyond what your publisher already does for it, e.g. repository cross-search tools like BASE, or feeding web pages on your university’s site (see the sketch after this list).

5- attention-bringing: they might also include an element of “publicising” your research, e.g. they tweet the titles of all content added. They might also raise your reputation by association, e.g. they present your work alongside the top researchers in your field. To my mind, this is also a reason to look out for subject-specialist sites/collections.

6- accessibility: if they improve on the access to your work that your publisher provides, then this is also a reason to deposit (e.g. repository versions that are open access).

What else could be added to this list? What examples are there of things that matter to you/your research community?

*Figshare, your institutional repository, subject repositories, Mendeley, ResearchGate, your own web page, to name a few…
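On the discoverability point (4) above: most institutional repositories expose their records over OAI-PMH, which is exactly how cross-search tools like BASE harvest and index them. As a hedged sketch, using the Sickle library and a placeholder endpoint (endpoints and metadata formats vary from repository to repository), harvesting looks roughly like this:

```python
# A minimal sketch of OAI-PMH harvesting, the protocol aggregators like
# BASE use to index repository records. The endpoint URL is a placeholder.
from sickle import Sickle

sickle = Sickle("https://repository.example.ac.uk/oai")  # placeholder endpoint
records = sickle.ListRecords(metadataPrefix="oai_dc")  # Dublin Core metadata

for record in records:
    # record.metadata maps Dublin Core fields to lists of values.
    titles = record.metadata.get("title", ["(no title)"])
    print(titles[0])
```

The practical upshot for authors: depositing in an OAI-PMH-compliant repository means your work gets picked up by these aggregators with no extra effort on your part.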

“Extreme Open Access”

This was the intriguing title of yesterday’s public seminar at Humboldt Uni’s IBI, delivered by Laurent Romary, currently Director of Research at INRIA, who has held other prestigious posts and has long been an open access visionary. In fact his seminar also had a subtitle, “scholarly publication as a public infrastructure”, but I figured that just the short version might be more intriguing for the automated tweet from this blog!

You can watch online recordings of IBI’s BBK seminars in full (watch out for this one: it’s in English!), but below is my summary to intrigue you further…

For those wishing to learn about Open Access (OA), Peter Suber’s book was described, metaphorically, as a bible!

I agree wholeheartedly with Romary’s view that it is better for scholars and universities to think about scientific information policies, rather than OA alone, and that we should anticipate that what we do with our publications will have consequences for what happens to, and what we should do with, our research data. Matters of cost, quality, usability and visibility are systemic, and when we publish articles online we also have an opportunity to use article-level metrics. University Vice-Chancellors and directors should understand such mechanisms and the opportunities available.

Romary displayed Elsevier’s profits from 2002 to 2011 and commented that learned societies are often playing the same game (chasing profit!) as publishers, before going on to look at OA possibilities. He dwelt on the 2003 Berlin Declaration on OA, which I believe is an ideal: it includes the right to copy further, and for an item’s availability to be irrevocable. “Extreme Open Access” indeed!

In my opinion, the Berlin Declaration version of OA is the kind of OA that institutional repository managers would love to have but can’t all reach. In my experience, it was an uphill battle to get content at all, never mind getting it deposited along with a true understanding of licensing rules and copyright: this was a hurdle that a number of repository managers in the UK chose to save for later. But the Berlin Declaration version of OA is definitely something to aim for!

Romary’s description of green & gold OA was very careful to explain that the two can work alongside each other, and that the one does not exclude the other. In fact, he described how this could work very harmoniously in a “freemium” model, similar to the way “Only Connect… Discovery pathways, library explorations, and the information adventure.” (the “unbook” that I contributed to & blogged about here) was published. The unbook is free in its HTML format but also available to buy as an e-book download or in print: similarly, journal articles that have been paid for as gold OA articles, or that appear in subscription journals, can also be deposited into repositories for green OA. The paid-for version will have advantages for the person who pays, e.g. the reader gets a better experience and a choice of formats, or the article is deposited into the repository on behalf of the fee-paying gold OA author.

INRIA, where Romary works, has an information policy, which includes a mandate to deposit into the repository, and (crucially, in my view) assessments and reports on staff will all be carried out based on the publications in the repository. I asked how well the mandate was being adhered to, but apparently it’s early days yet. There is a centralised budget in order to monitor payments of APCs, and presumably this can be balanced against subscription costs, although INRIA’s researchers may publish more than they read (if I understand correctly), so gold OA looks like being very expensive for them. (With purely made-up numbers: an institute publishing 1,500 articles a year at an average APC of €2,000 would face a bill of €3 million, which could easily outstrip its subscription spend if its researchers read relatively little.)

The OpenEdition and Episciences projects from France sounded particularly interesting, as ways of integrating a repository into broader research and publishing infrastructures. At this point, Romary argued that a repository has to be sophisticated. (Yes please, but who will pay for that?!) By way of sophistication, he elaborated on the importance of authority lists for authors, institutions and projects; of persistent identifiers; and of long-term archiving capability. I think that all repository managers would aim for that, but different repositories achieve it to different degrees.

It is precisely this fragmented repository environment that Romary described as a big challenge for the academic community, if they are to make the most of their repositories and of their publications. The advent of scientific social networks (like ResearchGate, Mendeley, Academia.edu, etc.) does not help with this fragmentation. But the good news at the end was that we are still learning and developing an infrastructure that could serve the public and indeed be labelled as extreme open access.

At the end of the presentation, we discussed some further related issues, including whether peer review is a good mechanism for ensuring quality, and the advantages of open peer review. Perhaps more on those themes in a separate blog post…