Choosing scholarly journals: peer review, time and rejection rates

This post is part of a mini-series that I’m creating, about choosing where to publish, aimed at early career researchers. If you haven’t got time to read it all, then maybe just scan for the most useful stuff in bold text!

I started this series with alternatives/additions to journal articles. Then I looked at the first two criteria, general reputation and suitability or relevance. And now I’m moving on to consider peer review, which of course influences reputation and is usually considered a sign of scholarly quality. We know that peer review is important and that journals which are not peer reviewed are often less highly regarded among scholars.

Connected to this are the time to publication and a journal's rejection or acceptance rates, but neither is easy to find, to understand or to use when deciding where to publish. In fact, I think they contribute little to that decision because they are so murky. The exception is the question of when an article actually counts as "published" (see that section below), which is crucial for some authors.

These criteria are important for authors to understand and they also illustrate how useful it is to get to know journal editors. Should you ever find yourself in conversation with an editor, reviewer or author from the journal at the top of your wishlist, you might want to be prepared, so I’ve listed some questions towards the end of this post that could be helpful.

[Image: two silhouetted heads face each other, covered in colourful question marks. Caption: Ask an editor, if you can!]

Peer review

If the journal is peer reviewed (sometimes also known as "refereed") then this will weigh heavily in its favour, in terms of its reputation among the scholarly community. Peer review is used as a validation and polishing process, helping to assure the quality of the research that you will find in a journal.

How do I find peer reviewed journals?

Directories of journals like Ulrich’s or Cabell’s will tell you whether a journal is peer reviewed or not (among other information) – if your institution has a subscription to one of these sources. And of course, you can check out journal home pages, for journals that you’re already aware of.

Note that Cabell have both a whitelist and a blacklist. The whitelist has lots more useful information for an author choosing where to publish than Ulrich’s does. But it has two major disciplinary gaps: Medicine and Engineering. Cabell’s blacklist covers all disciplines, and attempts to take over where Beall’s list left off: they consulted with Jeffrey Beall when deciding how to go about their blacklist, but didn’t just copy his list. I’m not covering so-called “predatory journals” in this blogpost (it’s coming soon!), but I thought it worth a mention at this stage.

Ulrich’s directory was historically designed for librarians choosing journals for a collection and covers way more titles than Cabell’s, so the two sources are rather different. Some years ago now, I asked Ulrich’s about journals that appear not to be refereed/peer reviewed (they use a little referee’s shirt symbol), and they told me that journals which have no symbol may in fact be refereed, but their data did not indicate it. So the directory is a starting point but you do need to check details yourself. (The University of Toronto have a video on how to use Ulrich’s if you’re interested in this.)

What do you mean “peer reviewed”?

The phrase “peer review” is not used to describe a standardised process: there are many different kinds of peer review, and some might appeal more to you as an author. A more rigorous process with more steps and more people might take more time, but result in a better quality article.

Some variations include:

  • Blind, double blind or open? This is about whether the authors and reviewers are aware of one another’s identity. Maybe you’re comfortable with not knowing who your reviewers are (blind): some argue that this frees reviewers to be more critical and therefore add to the quality of the article. Maybe you’d rather that they also didn’t know who you are (double blind). Or maybe you’d rather that everything was out in the open so that you each know who the other parties are: some argue that this makes reviewers more helpful and less off-hand or confrontational. Further, with some types of open peer review, the readers can also see attributed reviews and responses: this is both transparent and open peer review.
  • Transparent peer review. An article in the Scholarly Kitchen highlights the importance of transparency, where the content of the review process is available for all to read. It also describes in more detail how transparent peer review works, including publication of author responses to peer review. The difference from open peer review is that reviewers may remain anonymous.
  • Number of reviewers per article: there may be only two reviewers plus the editor, or some journals will use more reviewers. More people reviewing could also mean more requirements for you to polish your article, since they could all bring different perspectives, some of which may be difficult for you to reconcile. However, some editors may help to consolidate reviewer comments: this is why it's so worthwhile contacting someone already published with your journal of choice, to learn from their experience. If it's your first journal article then a helpful editor is a real argument in favour of a journal! It is perhaps also a good sign (and useful information) if a journal has clear guidelines for peer reviewers on its website.
  • Stages of peer review: sometimes it’s not only about the number of people, but also the stages through which your article will pass. Maybe a third reviewer will be consulted only if the first two disagree about whether the article should be accepted or not. Or maybe the editor takes that decision. In some journals, an additional reviewer will be used to check for spelling, grammar, etc. A helpful diagram and explanation from Elsevier explains their system further.

At some journals, you may be asked to suggest suitable peer reviewers: my earlier blogpost about impressing editors has further discussion of peer review possibilities.

For more information on peer review, a recent post on the LSE Impact of Social Sciences blog discusses problems with traditional peer review and opportunities to improve it, and my round-up of 2016's Peer Review Week offers a light-hearted look at some of the main topics in this area.

Responding to peer review is beyond the scope of this post, but I've linked to a video clip from the excellent "Publish and Prosper" wikispaces tutorial, where you can hear a voice of experience. Basic, sensible advice is to make sure you respond to all of the peer reviewers' comments.

Time to publication

This is not simple! See especially the section "When is it actually 'published'?", because there are pitfalls to avoid if you need not only publication but also citations within a tight time-frame.

[Image: an open day planner. Caption: Time flies…]

How long does it take?

The time from submission of your article until it eventually appears in print (or is rejected) can vary a great deal from one journal to another, and across disciplines. For many journals, you're looking at a full calendar year – at least. As explored in my post about impressing journal editors, time to publication can be influenced by authors getting their submission right at the outset, saving the article from travelling backwards and forwards for re-submission, or onwards to a new target journal. We often hear tales like that of the student who submitted an article to the wrong section of a journal, resulting in delays (see the THE article on getting published, mentioned in my first post in this series). I've also blogged about the loss of time at journals, where you can find more discussion of journal processes which might lead to delays.

Some authors are most interested in the time before the acceptance/rejection decision is made, so that they can either move on and submit to another journal, or start advertising the accepted article. Some journals make that decision relatively quickly and will usually advertise the fact if they do: Nature News reports a median of 100 days for such decisions among journals in PubMed, but read that piece for all the caveats. (See also below, where I reference the same piece again in relation to "resetting the clock".)

How do you know how long it will take?

There is no one handy source of information here: you must look on journal websites and ask around. Some publishers, like the MLA, will describe the process, including typical timeframes and the possible outcomes of decision making. Note that their journals use editorial board meetings, so one question you could ask is: how often does the board meet? Maybe two journals that you are comparing use the same process, but one board meets twice a year and the other three times a year. You might assume that the journal with three editorial board meetings a year will process your article faster, but the volume of submissions is difficult to estimate too: maybe there is a reason they need more meetings.

Journal websites and journal editors sometimes provide information, but (as with rejection rates, see below) you should be very careful in interpreting it.

This is why I keep coming back to finding someone who knows your journal of choice, who can tell you about their experience.  To find an author at your institution who has (recently) published in a particular journal, note that you can search by date, journal title and author affiliation on databases like Web of Science (WoS) and Scopus.
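For example, if I wanted to find recent authors from the University of Sheffield in the Journal of Documentation (both chosen purely as illustrations), the advanced search screens let me combine field codes along these lines. Do check each platform's own help pages, though: these codes are from memory and the exact syntax changes from time to time.

```
Scopus advanced search:    SRCTITLE("Journal of Documentation") AND AFFIL("University of Sheffield") AND PUBYEAR > 2015
Web of Science advanced:   SO=("Journal of Documentation") AND OG=(University of Sheffield) AND PY=(2016-2018)
```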

When is it actually “published”?

Some journals have a really helpful feature where your article goes online as soon as it is accepted for publication. They might also deem the article to be "published" at this point, as regards the timeframes that they give you on their journal information webpages. At this point you can advertise on your CV and online profiles/publication lists that your article has been accepted by that journal, and scholars can read and benefit from your research findings. This is great, but a word of warning: Elizabeth Gadd has written about her experience of waiting for a paper from 2016 which will not be officially published as the "version of record" until March 2019. The "version of record" is the one that gets a volume, part and page number so that it can be indexed in databases like WoS and Scopus, and indeed so that citations can be tracked and counted towards her scholarly profile, or her institution's scholarly record.

To find out about the gap between online release and formal publication, you could look at the most recently released journal articles on the online platform for your choice of journal, where they might also display the year in which they are expected to appear in a volume of that journal. Or indeed you could approach an editor (see my section on this, below).

Rejection rates

We could also talk about acceptance rates: 80% rejection or 20% acceptance, which sounds better to you? Related to the time to publication, rejection/acceptance rates could in theory help you to be strategic in choosing a journal where you have a higher chance of acceptance. Or you might see a high rejection rate as a sign of quality and decide that, since you can afford the time to re-submit to a new journal, it's worth the risk – especially if you know that the journal is quick to make this decision. However, rejection rates might not be as helpful as they sound.

[Image: ink stamp with stars and the word ACCEPTED]

What are my acceptance chances?

Sometimes journal websites have information for submitting authors, where they advertise an acceptance or rejection rate. Some publishers issue reports, for example the American Psychological Association make data available in their Journal Statistics and Operations Data. You can also find information about journals in some subscription resources, for example the Modern Language Association (MLA) International Bibliography or Cabell's Directory. Such data is usually supplied by journal editors/editorial staff, so if the rate for a journal is not publicly advertised, then it is the editorial team that you will need to find a way to contact.

Even when you find, or are given, a rate, be aware that there are no standards for calculating it, and it could be an estimate. Furthermore, editors won't want to make their journals look too exclusive, with high rejection rates discouraging quality submissions. Nor will they want them to look too inclusive, and therefore not good enough for high quality submissions. So they might measure, tweak or estimate rejection rates according to what they think looks best for their journal, which means you just can't compare one journal with another. Perhaps this information can really only be used to prepare you for almost inevitable rejection, or else to appreciate your just cause for celebration if your paper is accepted!
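To see how much the counting method matters, here is a tiny worked example with entirely invented numbers. The same journal, in the same year, could honestly report several different figures depending on what it counts:

```python
# All numbers invented, purely to illustrate how the counting method changes the "rate".
submissions = 200                    # everything received in a year
desk_rejections = 80                 # rejected by the editor without peer review
sent_to_review = submissions - desk_rejections   # 120 papers peer reviewed
revise_and_resubmit = 50             # invited to revise; some never come back
finally_published = 30

# Strict: only papers that reach publication, against everything submitted
print(finally_published / submissions)                           # 0.15
# Generous: "revise and resubmit" invitations counted as acceptances
print((finally_published + revise_and_resubmit) / submissions)   # 0.40
# Different denominator: desk rejections left out entirely
print(finally_published / sent_to_review)                        # 0.25
```

Three very different "acceptance rates" from one set of numbers, which is why comparing the figure across journals tells you so little.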

I’m not covering how to handle rejection here, but if it happens then do remember to thank the editor and be gracious.

Note that I deliberately titled this section "acceptance chances" because I wanted to point to my post on impressing journal editors again: you can influence your chances beyond whatever the figures say. If more than 50% of articles are rejected for not following journal submission guidelines, then you can make sure that your article is not one of those.

Revise and resubmit is not a rejection

It’s fairly common that all articles which get sent forward for peer review are included in acceptance counts, even though as an author you might feel that your paper has not been accepted when the reviewers want you to “revise and resubmit”. Some papers may never be resubmitted, or in fact are submitted to a different journal and so would seem to be rejected by the first journal in effect, if not in the statistics.

Note that when papers are re-submitted, sometimes the date of re-submission is taken as the date of submission when calculating the time to publication at a journal. A Nature news feature talks about this as "resetting the clock".

Summary of time to publication & rejection rates

Both time to publication and rejection rates rely on information from editorial teams, which you might find on journal websites. But if you can’t find what you want then maybe you can find a way to make contact with an editor, and ask. If you get in touch with an editor, then make sure you make a good impression: you can ask about information not publicly advertised, but perhaps it is best to do so as part of a broader conversation.

Conferences are an ideal place to look out for journal editors, and to talk to them about the conference itself before asking about the journal's processes. Rosalia da Garcia from SAGE Publishing suggests making friends with editors (part of the "Publish and Prosper" wikispaces tutorial).

Ten questions to ask those in the know

Make sure you’ve read all info on the journal website and other available sources before you ask. And cherry pick: which of these questions are of most interest to you? You don’t want the editor to feel interrogated! Maybe you could ask an author from the journal some of these instead (especially no. 6!).

Don’t forget to strike up a general conversation first, full of admiration for the journal and wonder at the mysteries of the publication process. And if possible, show that you’re familiar the latest editorial piece they’ve written, or you attended their talk at the conference.

  1. Are there any changes likely in the near future, to the peer review or publishing process? (Maybe express your own views on open peer review, or similar.)
  2. How long does it take before a reject/accept decision is made?
  3. What is their “pet hate” in terms of mistakes that submitting authors make?
  4. How large a backlog of articles is waiting to be processed? (This might affect future rejection rates / time to rejection, or indeed substitute for the rejection rate when one is not shared.)
  5. What is the official rejection rate, and does it include articles where the outcome is "revise and resubmit"?
  6. Does the editor help to reconcile directly opposite peer review comments?
  7. How often does a third/extra peer reviewer get consulted?
  8. After acceptance, how long before an article typically appears online?
  9. When does the “version of record” with volume, part and page number, which can be indexed in citation tracking sources, get issued?
  10. What do you look for in a peer reviewer? (Maybe say that you’re willing to act as a peer reviewer yourself, and explain your expertise.)

If you’re able to strike up a friendship, then perhaps you could even ask if the journal has unusually high acceptance rates at the moment!

[Image: two dogs silhouetted against a sunset. Caption: Best friends forever… maybe!]

Final thoughts

As I said at the beginning, these three criteria are not the most important when choosing a journal to publish in. They are, however, fundamental to understanding the scholarly publishing process. The suitability or fit of your work to the journal is far more important, and so, perhaps, are features like Open Access or impact factors (both coming soon in this series!). But if there are several journals that might suit your work, then maybe this sort of information, or even your impressions on meeting editors, could help you to narrow down your wishlist.

Choosing scholarly journals: first two criteria

This is my second post in a series that I’m building up, on choosing where to publish. Last time I looked at 10 alternatives/additions to the journal article. This post focuses more on journals themselves, and how you select the right one for your work. Remember that you should only submit to one journal at a time, and tailor your article to that journal.

A fairly recent piece in the THE, "Want to be a successful academic? It's all about getting published", focuses on three elements: impact factor, audience and rejection rates. But there are other elements to the decision that I don't want to ignore. So I'm starting with a look at overall reputation and suitability, and then I'll go on to look at rejection rates, among other topics, in my next post.

Journal reputation

This is a really tricky topic! It is affected by other things which I'll discuss in more detail in later posts, such as peer review processes and impact factors. However, you can also get a more instinctive feel for, or overall estimation of, the reputation of a journal. It helps to read widely so that you can judge for yourself, and to be well networked with lots of contacts whom you can ask. You may find review articles which rank journals in your field: do a literature search, and also see "Journal Quality Lists" in the St Johns University Library libguide. You can build a "wish list" of journals that you'd like to be published in, and then select the 3-5 most relevant to the research you want to publish now, to investigate further.

[Image: bunches of grapes hang from a vine. Caption: I heard it on the grapevine…]

I also like Phil Davis' Scholarly Kitchen discussion of a call for scientists to publish in journals that are linked to a scholarly society. His blogpost also points out that the journal brand matters to scientists, and brands like "Nature, Science, The Lancet, JAMA, EMBO, PLOS, BioMed Central and many others" seem to function as a kind of recommendation for the work they present. So it matters which organisation(s) are behind the journal. You could start with an organisation that you know and trust, or if there's a journal that you want to know more about, you could look at who the publisher or commissioning organisation is, to see if you're satisfied with their reputation and their approach. I found a lovely video on publishers that is part of the "Publish and Prosper" wikispaces tutorial.

Both societies and journal brands lend authority which is built from a long track record of quality. Longevity is a good sign, not only because quality processes and models have developed through experience, but also because it means that the article stands a good chance of being available for posterity. However, longevity isn't the only factor: innovation adds to quality too, and Phil Davis calls for societies to learn from the commercial publishers and their journals. Being well known and well recognised is something that I think the commercial publishers have concentrated on. And if you already know of a journal, then that will really help you to assess its suitability for your work, which is why I started with the overall reputation of a journal.

Other signs of prestige include who is on the editorial panel, and who is already a published author with that journal: are these big names in your field? And of course, you should assess the quality of the articles published in a journal. I recommend my earlier overview blogpost with 12 questions to ask, if you’re not sure about the quality of a journal. The “Think. Check. Submit.” site has a great quick video, too.

Subject match/suitability

Also known as "relevance", this is perhaps the most important criterion that you will consider! If your article isn't a good match for the journal that you submit to, it will be rejected, and you don't want to waste either your own time waiting for that decision, or the journal editor's time.

[Image: a puzzle piece fits into the gap. Caption: Find the perfect fit]

You know which journals you’re reading and citing yourself, and perhaps your contact network could also help, as I mentioned above. If you know someone who has already published with a journal that they recommend to you then they could be a source of really valuable advice about the publishing process.

If you know of key publishing houses for your discipline then it’s worth visiting their websites too: they often provide “journal finder” tools where you paste in your title and abstract, and their tool will suggest a journal or journals to you, which you can then investigate and consider.

Suitability is not always about the subject. It could be about the novelty of your work, or indeed that the journal specialises in negative findings or reproducibility studies, or some other kind of research. Sometimes suitability is about the style of your article in terms of the balance of words to diagrams, or the way you break down your work to fit in specific sections or headings. Don't forget referencing style too: you need to be able to match the way that your journal of choice presents research articles. This is why familiarity with the journal can be an important criterion, because it will help you to match what they are looking for. You should at least read a journal's aims & scope and descriptive materials, and preferably also any instructions for authors, to be sure of what the journal's expectations are.

A final thought on suitability

I’ve focussed on your work’s suitability to a journal, but you also need to think about the journal’s suitability to your research. This post doesn’t discuss open access (OA), but this is one criteria that could rule a journal out of consideration. If your research is funded then you may find that your funder, or even the institution where you’re based has a requirement for you to publish OA. So watch out for journals that can deliver the right kind of OA to match your funder or institutional requirements. More on that in a following blogpost, but for now I recommend the SherpaJuliet website to you.

Similarly, if there is a fee or cost to the author, for extra pages or for colour illustrations, or for open access, then you need to make sure that you can afford the fees.

The who factor

By now you will have noticed that I've bolded factors that are useful when you're choosing a journal, and a few of these are to do with "who" is involved with a journal. While you're busy checking those people out on profile sites like ResearchGate and LinkedIn, why not try connecting with them? I can't stress enough how useful contacts can be! I wrote a quick and popular blogpost about 7 ways to make the first contact that you might also find helpful.

Also, look at where the researchers you admire are publishing, and which journals they are citing. After all, those are the researchers whom you want to read your article, so perhaps focus on journals that you know they read, based on their reference lists.

In my next post, I’ll look more at peer review, rejection rates and time to publication.

Images: CC0 via Pixabay.

How do you assess the quality of recommendations?

I wrote here last year about the marvellous Fishscale of academicness, as a great way to teach students information literacy skills by starting with how to evaluate what they've found. I'm currently teaching information ethics to Masters students at Humboldt Uni, and this week's theme is "Trust": it touches on all sorts of interesting topics in this area, including recommendation systems, also known as recommendation engines.

An example of such a recommendation system in action would be the customer star ratings for products on Amazon, which are averaged out and may be used as a way to suggest further purchases to customers, amongst other information. Or reviews for hotels/cafes on Tripadvisor, film suggestions on Netflix, etc. Recommendations are everywhere these days: Facebook recommends apps you might like, and will suggest "people you may know"; LinkedIn and Twitter work in similar ways.

For me, these recommendations raise certain questions, which also turn up in debates about privacy and about altmetrics, such as:

How much information do you have to give them about yourself, do you trust them with it, and how good are their recommendations anyway? Are you happy to be influenced by what others have done/said online?

Recommendation systems use “relevance” algorithms, which are similar to those used when you perform a search. They might combine a number of factors, including:

  • Items you’ve already interacted with (i.e. suggesting similar items, called an item-to-item approach)
  • User-to-user: it finds people who are similar to you, eg they have displayed similar choices to you already, and suggests things based on their choices (there's a minimal sketch of this approach a little further down)
  • Popularity of items (eg Facebook recommends apps to you depending on how much use they've had). Note that this may have to be balanced against novelty: new items will necessarily not have achieved high popularity.
  • Ratings from other users/customers (here, they might weight certain users’ scores more heavily, or average star ratings, or just preference items with a review)
  • Information that they already have about you, against a profile of what such a person might like (eg information gleaned from tracking you online through your browser or on your user profile on their site, or that you have given them in some way)

The sophistication of the algorithm used and the size of the data pool drawn on (or lack thereof) might also depend on the need for speed of the system.
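To make the user-to-user idea from the list above a bit more concrete, here's a deliberately minimal sketch in Python. The ratings, film names and single-neighbour approach are all invented for illustration; real recommendation engines mean-centre the ratings, blend many neighbours and factors, and work at a vastly larger scale.

```python
from math import sqrt

# Invented toy data: each user's star ratings for a handful of films.
ratings = {
    "alice": {"film_a": 5, "film_b": 4, "film_c": 1},
    "bob":   {"film_a": 4, "film_b": 5, "film_d": 4},
    "carol": {"film_b": 1, "film_c": 5, "film_e": 4},
}

def cosine_similarity(r1, r2):
    """How alike two users are, judged only on the items both have rated."""
    shared = set(r1) & set(r2)
    if not shared:
        return 0.0
    dot = sum(r1[i] * r2[i] for i in shared)
    norm1 = sqrt(sum(r1[i] ** 2 for i in shared))
    norm2 = sqrt(sum(r2[i] ** 2 for i in shared))
    return dot / (norm1 * norm2)

def recommend(user, all_ratings):
    """Find the most similar other user, then suggest their items that `user` hasn't rated."""
    others = {u: r for u, r in all_ratings.items() if u != user}
    neighbour = max(others, key=lambda u: cosine_similarity(all_ratings[user], others[u]))
    unseen = {item: score for item, score in others[neighbour].items()
              if item not in all_ratings[user]}
    return neighbour, sorted(unseen, key=unseen.get, reverse=True)

print(recommend("alice", ratings))   # ('bob', ['film_d'])
```

Even this toy version shows the trade-offs in the list above: it can only ever recommend what somebody else has already rated (no novelty), and it needs my ratings data before it can say anything at all.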

Naturally, those working on recommendation engines have given quite a bit of consideration to how they might evaluate the recommendations given, as this paper from Microsoft discusses, in a relatively accessible way. It introduces many relevant concepts, such as the notion that recommending things that it knows you’ve already seen will increase your trust in the recommendations, although it is very difficult to measure trust in a test situation.

We see that human evaluation of these recommendation systems is important as “click through rate (CTR)” is so easily manipulated and inadequate as a measure of the usefulness of recommendations, as described and illustrated in this blog post by Edwin Chen.

Which recommendations do you value, and why? I also came across a review of movie recommendation sites from 2009, which explains why certain sites were preferred and gives plenty of food for thought. From my reading and experience, I'd start my list of the kind of things that I'd like from recommendation systems with:

  • It doesn’t take information about me without asking me first (lots of sites now have to tell you about cookies, as the Cookie collective explain)
  • It uses a minimal amount of information that I’ve given it (and doesn’t link with other sites/services I’ve used, to either pull in or push out data about me, unless I tell it that it can!)
  • Suggestions are relevant to my original interest, but with the odd curveball thrown in, to support more serendipitous discovery and to help me break out of the "filter bubble"
  • Suggestions feature a review that was written by a person (in a language that I speak), so more than just a star rating
  • Suggestions are linked in a way that allows me to surf and explore further, eg filtering for items that match one particular characteristic that I like from the recommendation
  • I don’t want the suggestions to be too creepily accurate: I like to think I’ve made a discovery for myself, and I doubt the trustworthiness of a company that knows too much about me!

I’m sure there’s more, but I’m equally sure that we all want something slightly different from recommendation systems! My correspondence with Alke Groeppel-Wegener suggests that her students are very keen on relevance and not so interested in serendipity. For me, if that relevance comes at the expense of my privacy, so that I have to give the system lots of information about myself, then I definitely don’t want it. What about you?

The importance of evaluation

One of the things that I like so much about the chapter on “The fishscale of academicness”, by Alke Gröppel-Wegener and Geoff Walton, in the “Only Connect… ” book, is that it focuses on evaluating the information resources that you find.

Librarians often start information skills training, quite logically, with the tools that you need to find information resources. After all, you need to find stuff in order to evaluate it! Of course, Librarians do teach how to evaluate information resources (I used to base this on the classic question words: Who produced it & Where, What are they saying, How & Why are they writing, and When was it written? The University of Bath also has a handy checklist for handling academic sources), and Librarians have long recognised the importance of such skills when handling Internet information, but I like that the Fishscale technique puts the evaluation skills first. It seems appropriate, in the Google era.

Here is a link directly to the chapter, if you want to read more about The fishscale of academicness. It is also beautifully illustrated by Josh Filhol.