Choosing scholarly journals: peer review, time and rejection rates

This post is part of a mini-series I’m creating about choosing where to publish, aimed at early career researchers. If you haven’t got time to read it all, then maybe just scan for the most useful stuff in bold text!

I started this series with alternatives/additions to journal articles. Then I looked at the first two criteria, general reputation and suitability or relevance. And now I’m moving on to consider peer review, which of course influences reputation and is usually considered a sign of scholarly quality. We know that peer review is important and that journals which are not peer reviewed are often less highly regarded among scholars.

Connected to this are the time to publication and the rejection (or acceptance) rate at a journal, but neither is easy to find, to understand or to use when deciding where to publish. In fact, I think they contribute little to that decision because they are so murky, with one exception: see the section “When is it actually ‘published’?”, because that is crucial for some authors.

These criteria are important for authors to understand and they also illustrate how useful it is to get to know journal editors. Should you ever find yourself in conversation with an editor, reviewer or author from the journal at the top of your wishlist, you might want to be prepared, so I’ve listed some questions towards the end of this post that could be helpful.

[Image: two silhouetted heads face each other, covered in colourful question marks. Caption: Ask an editor, if you can!]

Peer review

If the journal is peer reviewed (sometimes also known as “refereed”) then this will weigh heavily in its favour, in terms of its reputation among the scholarly community. Peer review is used as a validation and polishing process, thus assuring the quality of research that you will find in a journal.

How do I find peer reviewed journals?

Directories of journals like Ulrich’s or Cabell’s will tell you whether a journal is peer reviewed or not (among other information) – if your institution has a subscription to one of these sources. And of course, you can check out journal home pages, for journals that you’re already aware of.

Note that Cabell’s has both a whitelist and a blacklist. The whitelist has much more useful information for an author choosing where to publish than Ulrich’s does, but it has two major disciplinary gaps: Medicine and Engineering. Cabell’s blacklist covers all disciplines and attempts to take over where Beall’s list left off: they consulted with Jeffrey Beall when deciding how to go about their blacklist, but didn’t just copy his list. I’m not covering so-called “predatory journals” in this blogpost (that’s coming soon!), but I thought it worth a mention at this stage.

Ulrich’s directory was historically designed for librarians choosing journals for a collection and covers far more titles than Cabell’s, so the two sources are rather different. Some years ago now, I asked Ulrich’s about journals that appear not to be refereed/peer reviewed (they mark refereed journals with a little referee’s shirt symbol), and they told me that journals with no symbol may in fact be refereed, but their data did not indicate it. So the directory is a starting point, but you do need to check the details yourself. (The University of Toronto have a video on how to use Ulrich’s if you’re interested in this.)

What do you mean “peer reviewed”?

The phrase “peer review” is not used to describe a standardised process: there are many different kinds of peer review, and some might appeal more to you as an author. A more rigorous process with more steps and more people might take more time, but result in a better quality article.

Some variations include:

  • Blind, double blind or open? This is about whether the authors and reviewers are aware of one another’s identity. Maybe you’re comfortable with not knowing who your reviewers are (blind): some argue that this frees reviewers to be more critical and therefore adds to the quality of the article. Maybe you’d rather that they also didn’t know who you are (double blind). Or maybe you’d rather that everything was out in the open so that you each know who the other parties are: some argue that this makes reviewers more helpful and less off-hand or confrontational. Further, with some types of open peer review, the readers can also see attributed reviews and responses: this is both transparent and open peer review.
  • Transparent peer review. An article in the Scholarly Kitchen highlights the importance of transparency, where the content of the review process is available for all to read. It also describes in more detail how transparent peer review works, including publication of author responses to peer review. The difference from open peer review is that reviewers can remain anonymous even though the content of their reviews is published.
  • Number of reviewers per article: there may be only two reviewers plus the editor, or some journals will use more reviewers. More people reviewing could also result in more requirements for you to polish your article, since they could all bring different perspectives, some of which may be difficult for you to reconcile. However, some editors may help to consolidate reviewer comments: this is why it’s so worthwhile contacting someone already published with your journal of choice, to learn from their experience. If it’s your first journal article then a helpful editor is a real argument in favour of a journal! It is perhaps also a good sign (and useful information) if a journal has clear guidelines for peer reviewers on its website.
  • Stages of peer review: sometimes it’s not only about the number of people, but also the stages through which your article will pass. Maybe a third reviewer will be consulted only if the first two disagree about whether the article should be accepted or not. Or maybe the editor takes that decision. In some journals, an additional reviewer will be used to check for spelling, grammar, etc. A helpful diagram and explanation from Elsevier explains their system further.

At some journals, you may be asked to suggest suitable peer reviewers: my earlier blogpost about impressing editors has further discussion of peer review possibilities.

For more information on peer review, a recent post on the LSE Impact of Social Science blog discusses problems with traditional peer review and opportunities to improve it, and my round-up of 2016’s Peer Review week offers a light-hearted look at some of the main topics in this area.

Responding to peer review is beyond the scope of this post, but I’ve linked to a video clip from the excellent “Publish and prosper” wikispace tutorial, where you can hear a voice of experience. Basic, sensible advice is to make sure you respond to all of the peer reviewers’ comments.

Time to publication

This is not simple! See especially the section “When is it actually ‘published’?”, because there are pitfalls to avoid if you need not only publication but also citations within a tight time-frame.

[Image: an open day planner. Caption: Time flies…]

How long does it take?

The time from submission of your article until it eventually appears in print (or is rejected) can vary a great deal from one journal to another, and across disciplines. For many journals, you’re looking at a full calendar year – at least. As explored in my post about impressing journal editors, time to publication can be influenced by authors getting their submission right at the outset, saving the article from travelling backwards and forwards for re-submission, or onwards to a new target journal. We often hear tales like that of the student who submitted an article to the wrong section of a journal, resulting in delays (see the THE article on getting published, mentioned in my first post in this series). I’ve blogged about the loss of time at journals too, where you can find more discussion of journal processes which might lead to delays.

Some authors are most interested in the time before the acceptance/rejection decision is made, so that they can move on to submit to another journal or already advertise the accepted article. Some journals make that decision relatively quickly and they will usually advertise this if they do: Nature News reports a median of 100 days for such decisions among journals in PubMed, but read that piece for all the caveats. (See also below, where I reference the same piece again in relation to “resetting the clock”.)

How do you know how long it will take?

There is no one handy source of information here: you must look on journal websites and ask around. Some publishers, like the MLA, will describe the process, including typical timeframes and the possible outcomes of decision making. Note that their journals use editorial board meetings, so one question you could ask is: how often does the board meet? Maybe two journals that you are comparing use the same process, but one board meets twice a year and the other three times. You might think that the journal with three editorial board meetings a year will process your article faster, but the volume of submissions is difficult to estimate too: maybe there is a reason they need more meetings.

Journal websites and journal editors sometimes provide information, but (as with rejection rates, see below) you should be very careful in interpreting it.

This is why I keep coming back to finding someone who knows your journal of choice, who can tell you about their experience.  To find an author at your institution who has (recently) published in a particular journal, note that you can search by date, journal title and author affiliation on databases like Web of Science (WoS) and Scopus.
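
If you’re comfortable with a little scripting, the sketch below shows what such a search might look like against the Scopus Search API. This is a minimal, hypothetical example: the journal title, affiliation and API key are placeholders, and the query syntax and response field names are my assumptions based on Elsevier’s developer documentation, so do check them (or simply run the equivalent search in the Scopus or WoS web interface).

```python
# Hypothetical sketch: find recent articles from a given journal with authors
# at a given institution, via the Scopus Search API (requires an Elsevier
# developer API key and appropriate access entitlements).
import requests

API_KEY = "your-api-key-here"  # placeholder
query = 'SRCTITLE("Journal of Documentation") AND AFFIL("Your University") AND PUBYEAR > 2015'

response = requests.get(
    "https://api.elsevier.com/content/search/scopus",
    headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
    params={"query": query},
)
response.raise_for_status()

# Each entry should include the first author, title and cover date, which is
# enough to identify colleagues you could approach about their experience.
for entry in response.json()["search-results"].get("entry", []):
    print(entry.get("dc:creator"), "|", entry.get("prism:coverDate"), "|", entry.get("dc:title"))
```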

When is it actually “published”?

Some journals have a really helpful feature where your article goes online as soon as it is accepted for publication. They might also deem the article at this point to be “published”, as regards the timeframes that they give you on their journal information webpages. At this point, you can advertise on your CV and online profiles/publication lists that your article has been accepted by that journal, and scholars can read and benefit from your research findings. This is great, but a word of warning: Elizabeth Gadd has written about her experience of waiting for a paper from 2016 which will not be officially published as the “version of record” until March 2019. The “version of record” is the one that gets a volume, part and page number so that it can be indexed in databases like WoS and Scopus, and indeed so that citations can be tracked and counted towards her scholarly profile, or her institution’s scholarly record.

To find out about the gap between online release and formal publication, you could look at the most recently released journal articles on the online platform for your choice of journal, where they might also display the year in which they are expected to appear in a volume of that journal. Or indeed you could approach an editor (see my section on this, below).

Rejection rates

We could also talk about acceptance rates: 80% rejection or 20% acceptance: which sounds better to you? Related to the time to publication, rejection/acceptance rates could theoretically help you to be strategic and choose a journal where you have a higher chance of acceptance. Or you might see a high rejection rate as a sign of quality, decide that you can afford the time to re-submit to a new journal if necessary, and conclude that it’s worth the risk – especially if you know that the journal is quick to make this decision. However, rejection rates might not be as helpful as they sound.

[Image: an ink stamp with stars and the word ACCEPTED]

What are my acceptance chances?

Sometimes journal websites have information for submitting authors, where they advertise an acceptance or rejection rate. Some publishers issue reports: for example, the American Psychological Association makes data available in its Journal Statistics and Operations Data. You can also find information about journals in some subscription resources, for example the Modern Language Association (MLA) International Bibliography or Cabell’s Directory. Such data is usually supplied by journal editors or editorial staff, so if you want a rate that is not publicly advertised, it is the editorial team that you need to find a way to contact.

Even when you find, or are given, a rate, be aware that there are no standards for calculating it and it could be an estimate. Furthermore, editors won’t want to make their journals look too exclusive, with high rejection rates discouraging quality submissions. Nor will they want them to look too inclusive and therefore not good enough for high quality submissions. So they might measure, tweak or estimate rejection rates according to what they think looks best for their journal, which means you just can’t compare one journal with another. It is possible that this information can really only be used to prepare you for almost inevitable rejection, or else to understand your just cause for celebration if your paper is accepted!

I’m not covering how to handle rejection here, but if it happens then do remember to thank the editor and be gracious.

Note that I deliberately titled this section “acceptance chances” because I wanted to point to my post on impressing journal editors again: you can influence your chance beyond whatever the figures say. If more than 50% of articles are rejected for not following journal submission guidelines then you can make sure that your article is not one of those.

Revise and resubmit is not a rejection

It’s fairly common that all articles which get sent forward for peer review are included in acceptance counts, even though as an author you might feel that your paper has not been accepted when the reviewers want you to “revise and resubmit”. Some papers may never be resubmitted, or in fact are submitted to a different journal and so would seem to be rejected by the first journal in effect, if not in the statistics.

Note that when papers are re-submitted, then sometimes this date of re-submission is taken as the date of submission when calculating the time to publication at a journal. A Nature news feature talks about this as “resetting the clock”.
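
To make the murkiness concrete, here is a small sketch with invented figures (not taken from any real journal) showing how the same year’s submissions can produce several defensible “acceptance rates”, and how the reported time to publication shrinks if the clock restarts at resubmission.

```python
# Invented figures for one hypothetical journal, to show how counting
# conventions change the headline numbers.
submissions = 200        # total new submissions in a year
desk_rejected = 120      # rejected without peer review
accepted_outright = 20   # accepted after review without major revisions
rr_later_accepted = 25   # "revise and resubmit" papers eventually accepted

# Three defensible "acceptance rates" from the same underlying data:
strict = accepted_outright / submissions                                   # 10.0%
counting_rr = (accepted_outright + rr_later_accepted) / submissions        # 22.5%
of_peer_reviewed = (accepted_outright + rr_later_accepted) / (submissions - desk_rejected)  # 56.2%
print(f"{strict:.1%}, {counting_rr:.1%}, {of_peer_reviewed:.1%}")

# "Time to publication" is just as slippery if the clock resets at resubmission:
days_to_rr_decision = 90   # first submission to "revise and resubmit" decision
days_revising = 60         # time the author spends on revisions
days_to_acceptance = 45    # resubmission to acceptance

author_experience = days_to_rr_decision + days_revising + days_to_acceptance  # 195 days
reported_if_clock_resets = days_to_acceptance                                 # 45 days
print(author_experience, "days experienced vs", reported_if_clock_resets, "days reported")
```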

Summary of time to publication & rejection rates

Both time to publication and rejection rates rely on information from editorial teams, which you might find on journal websites. But if you can’t find what you want then maybe you can find a way to make contact with an editor, and ask. If you get in touch with an editor, then make sure you make a good impression: you can ask about information not publicly advertised, but perhaps it is best to do so as part of a broader conversation.

Conferences are an ideal place to look out for journal editors, and to talk to them about the conference itself before asking about the journal’s processes. Rosalia da Garcia from SAGE publishing suggests making friends with editors (part of the “Publish and Prosper” wikispaces tutorial).

Ten questions to ask those in the know

Make sure you’ve read all the information on the journal website and in other available sources before you ask. And cherry pick: which of these questions are of most interest to you? You don’t want the editor to feel interrogated! Maybe you could ask an author from the journal some of these instead (especially no. 6!).

Don’t forget to strike up a general conversation first, full of admiration for the journal and wonder at the mysteries of the publication process. And if possible, show that you’re familiar with the latest editorial piece they’ve written, or that you attended their talk at the conference.

  1. Are there any changes likely in the near future, to the peer review or publishing process? (Maybe express your own views on open peer review, or similar.)
  2. How long does it take before a reject/accept decision is made?
  3. What is their “pet hate” in terms of mistakes that submitting authors make?
  4. How large a backlog of articles is waiting to be processed? (This might affect future rejection rates / time to rejection, or indeed substitute for the rejection rate when one is not shared.)
  5. What is the official rejection rate, and does it include articles where the outcome is “revise and resubmit”?
  6. Does the editor help to reconcile directly opposite peer review comments?
  7. How often does a third/extra peer reviewer get consulted?
  8. After acceptance, how long before an article typically appears online?
  9. When does the “version of record” with volume, part and page number, which can be indexed in citation tracking sources, get issued?
  10. What do you look for in a peer reviewer? (Maybe say that you’re willing to act as a peer reviewer yourself, and explain your expertise.)

If you’re able to strike up a friendship, then perhaps you could even ask if the journal has unusually high acceptance rates at the moment!

[Image: two dogs silhouetted against a sunset. Caption: Best friends forever… maybe!]

Final thoughts

As I said at the beginning, these three criteria are not the most important when choosing a journal to publish in. They are, however, fundamental to understanding the scholarly publishing process. The suitability or fit of your work to the journal is far more important, and so, perhaps, are features like Open Access or impact factors (both coming soon in this series!). But if there are several journals that might suit your work, then maybe this sort of information, or even your impressions on meeting editors, could help you to narrow down your wishlist.


Reflections and a simple round-up of Peer Review Week 2016

It has been Peer Review Week this week: I’ve been watching the hashtag on Twitter with interest (and linked to it in a blogpost for piirus.ac.uk), and on Monday I attended a webinar called “Recognising Review – New and Future Approaches for Acknowledging the Peer Review Process”.

I do like webinars, as I’ve blogged before: professional development/horizon scanning from my very own desktop! This week’s one featured talks from Paperhive and Publons, amongst others, both of which have been explored on this blog in the past. I was particularly interested to hear that Publons are interested in recording not only peer review effort, but also editorial contributions. (Right at the end of the week this year, there have been suggestions that editorial work be the focus of next year’s peer review week so it seems to me that we’ve come full circle.) A question from the audience raised the prospect of a new researcher metric based on peer review tracking. I guess that’s an interesting space to watch!

I wondered where Peer Review Week came from: it seems to be a publisher initiative, if Twitter is anything to go by, since the hashtag is dominated by their contributions. On Twitter at least, it attracted some criticism of publishers: if you deliberately look at ways to recognise peer review, then some academics are going to ask whether it is right for publishers to profit so hugely from their free work. Some of those criticisms were painful to read, and some were also highly amusing.

There were plenty of links to useful videos, webpages and infographics about how to carry out peer review, both for those new to it and for those already experienced.

(On this topic, I thought that an infographic from Elsevier about reasons why reviewers refused to peer review was intriguing.)

Advice was also offered on how (and how not) to respond to peer reviews.

And there were glimpses of what happens at the publisher or editor level.

There wasn’t much discussion of the issue of open vs blind or double blind peer review, which I found interesting because recognition implies openness, at least to me. And there was some interesting research reported on in the THE earlier this month, about eliminating gender bias through double blind reviews, so openness in the context of peer review is an issue that I feel torn about. Discussion on Twitter seemed to focus mostly on incentives for peer review, and I suppose recognition facilitates that too.

Peer Review Week has also seen one of the juiciest stories in scholarly communication: fake peer reviews! We’ve been able to identify so much dodgy practice in the digital age, from fake papers and fake authors to fake email addresses (so that you can be your own peer reviewer) and citation rings. Some of this is, on one level, highly amusing: papers by Maggie Simpson, or a co-author who is, in fact, your cat. But on another level it is deeply concerning, and so it’s a space that will continue to fascinate me because it definitely looks like a broken system: how do we stick it all together?

Rejections, revisions, journal shopping and time… more and more time

I read a great news item from Nature, called “Does it take too long to publish research?”, and wanted to highlight it here. In particular, I thought that early career researchers might relate to the stories of featured researchers’ multiple rejections: there is some consolation in hearing others’ experiences. (Recently rejected authors might also seek advice in a great piece from The Scientist in 2015: Riding out rejection.) Also, I wanted to write up my reflections, identifying some reasons for rejection (these appear in bold throughout, in case you want to scan for them).

Whilst I’m on the topic of rejection stories: a recent episode of Radio 4’s The Life Scientific featured Peter Piot, who described (if I understood correctly) how difficult it was to get his research on HIV published in the 1980s because it was so groundbreaking that reviewers could not accept it. He knew that his findings were important and he persevered. So that could be one reason for rejection: you’re ahead of your field!

(Peter Piot also described his time working for the United Nations, in what was essentially a break from his academic career: if you’re interested in academic career breaks then you could take a look at the Piirus blog!)

Anyway, back to the Nature news item, where I picked up particular themes:

  1. Authors may be rejected a number of times before their work is even peer reviewed: a “desk rejection”. One of the authors featured was glad to finally receive revision requests after so many rejections without explanation. Without an explanation, we can’t know what the editors’ decisions were based on, but as I noted in an earlier post, editors might be basing their decisions on criteria like relevance to the journal’s readership, or compliance with the journal’s guidelines.
  2. Journals do report on time to publication, but that doesn’t always include the time you’ve spent on revisions: if you resubmit after making revisions, then at some journals the clock is re-started at the resubmission date. Likewise, I have read (or heard: sorry, I can’t find the link) elsewhere that reported rejection/acceptance rates don’t count papers invited to re-submit with revisions as rejections. So you might feel rejected when you have to make so many revisions, but in statistical terms your paper has not been rejected (yet!). There is still time for it to be rejected after you have resubmitted, of course, and that probably happens more often than you think. Some think that journals are not counting and reporting fairly, and I think there is room for improvement, but it’s a complex area.
  3. Top journals can afford to be more picky and so the bar seems to have been raised, in terms of requirements for publication (hence increased numbers of authors per paper, who bring more data between them). As the Nature news item says: “Scientists grumble about overzealous critics who always seem to want more, or different, experiments to nail a point.”
  4. Rejections could be a result of authors “journal shopping”, whereby they submit to top/high-impact journals first and work down a list. This is possibly due to the reliance, by those who hire and fund researchers, on the reputation and impact factor of the journal where an article is published. Researchers who target journals in the middle range of impact factor seem to stand the best chance of a quick review turnaround, but it seems that researchers are taking the risk of rejection and slower publication in order to stand a chance of appearing in a top journal.
  5. Journal editors and publishers are trying to ensure that the publication process is not slowed down, wherever possible. I’d like to quote one nice example of such attempts: “In 2009, Cell also restricted the amount of supplemental material that could accompany papers as a way to keep requests for “additional, unrelated experiments” at bay.” However, the Nature News item also points out the increased volume of papers to be processed and additional checks that papers might go through these days, for example plagiarism screens, animal welfare reports, competing interest disclosures, etc. Plagiarism screens can be tough: I remember an author telling me about how his paper was rejected for what amounted to self-plagiarism.
  6. The peer review process does take time, and at different journals this process might be quicker or slower, but even though (as I’ve previously blogged) there are pressures on the peer review system, it is not taking longer than it used to, on average. Neither has the digital world sped it up. The news item goes on to recount some of the innovations around peer review that various journals and publishers are implementing.

This made me think that there’s got to be a project somewhere, for someone to classify the revisions asked for in peer review processes and then count which is the most common. Reasons in my list so far:

  • poorly/not succinctly written (i.e. not intelligible!)
  • too little explanation/text
  • abstract doesn’t reflect findings
  • ethical issues with the data presented
  • ethical issues with the method
  • method unsuited to question
  • conclusions are over-reaching
  • needs to be set in context of other (specific/non-specific) research & add citations

These could be areas to be revised or indeed, reasons for rejection. I’m sure that there are more issue types and that my list is not complete, so feel free to share some more in the comments.
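
For what it’s worth, the counting part of that imagined project would be trivial once the reviewer comments have been classified; the hard (and interesting) work is the classification itself. Here is a toy sketch using made-up labels rather than any real dataset:

```python
# Toy sketch: once reviewer comments have been classified (the hard part!),
# counting the most common revision reasons is straightforward. Labels are invented.
from collections import Counter

classified_comments = [
    "poorly written", "method unsuited to question", "conclusions over-reaching",
    "poorly written", "needs more citations", "abstract doesn't reflect findings",
    "poorly written", "needs more citations",
]

for reason, count in Counter(classified_comments).most_common(3):
    print(f"{reason}: {count}")
```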

I know that some authors take the revision suggestions and do not resubmit to the journal that reviewed their article, but withdraw it and submit to one lower down the prestige list, thereby perhaps side-stepping another rejection. And thereby apparently achieving publication more quickly, for the second (or fifth or fifteenth) choice journal cannot know how much time an article spent awaiting the verdict of a different journal. Perhaps that is why journals prefer to count their publication time from the date of resubmission: they don’t know, either, whether an article will ever be resubmitted. And is it fair of an author to use a journal’s peer review process to polish their article, but not actually publish with that journal? A complex area, as I said already.

Well, if all this complexity has put you in need of cheering up, then I must recommend the Journal of Universal Rejection to you. If you don’t laugh then you might cry…