How to speed up publication of your research – and impress journal editors

In my last blogpost I looked at the time it takes to get published, and this led to a brief Twitter chat about how editors’ time gets wasted. Of course there are things that researchers can do to help speed up the whole system, just as there are things that publishers are trying to do. If you’re interested in how to write a great journal article in the first place (which, of course, is what will increase your chances of acceptance and therefore speed things up) then you could take a look at some great advice in the Guardian.

I’m not looking at writing in this blogpost, rather at the steps to publication that researchers can influence, sometimes for themselves and sometimes more altruistically. I imagine that a board game could be based on the academic publication process, whereby you get cards telling you that you must wait longer, or that you have been rejected and sent back to the start. Very occasionally you are told that a peer has sped things up for you in some way so that you (and your field) can move on.

Do what you’re told!
It sounds simple, but it’s amazing how many editors report that authors appear not to have read the guidelines before submitting. Wrong word counts, wrong line spacing, no data supplied, wrong reference formats and so on could all result in a desk rejection, thus wasting everyone’s time. A good reference management tool will ease and expedite reference style reformatting, but even so, matching each journal’s style is a lot of work if you submit the same article to many journals, so perhaps this begins with choosing the right journal (see below).

Also, authors who are re-submitting need to ensure that they respond to ALL the editor’s and reviewers’ recommendations. Otherwise, there might be another round of revisions… or a rejection, setting you back to square one.

Be brief and ‘to the point’ in your correspondence with journal editors
First question to authors: do you really need to write to the editor? Writing to check if their journal is a good match for your article is apparently annoying to journal editors, especially if your email looks like an automated one. If you have a question, be sure that you can’t find the answer on the journal’s website: this way you can save editors’ time so that they use it to make the right publishing decisions. If you want to make a good impression on an editor or seek their opinion then perhaps find a way to meet them personally at a conference. (Tip: if they are on Twitter then they might announce which conferences they are going to!)

Choose the right journal to submit to

I have no magic formula but these steps might help you to decide:

  1. Look for a good subject match. Then consider whether the type, scale and significance of your work fit the kind of material usually published in that journal. In other words, read some of the content recently published in the journal you intend to submit to. Check their calls for papers and see if you match them. And read their guidelines (see above).
  2. Listen to experienced authors. If you know someone with experience of publishing in a particular journal, then perhaps ask them for advice: getting to know the journal you are submitting to is important in helping you to target the right one.
  3. Use bibliometric scores with caution. I have blogged here previously about 12 signs of quality for a journal, and note that I don’t mention the impact factor! My number 1 is about peer review, and I expand on that below in this post. My number 5 is whether the journal is indexed in Web of Science or Scopus: this is not all about the impact factor either. What it means is that the journal you are considering has passed selection criteria in order to be indexed at all, that your article will be highly discoverable, and that it would contribute to your own h-index as an author. If you really want to use a bibliometric, you could look at the Article Influence Score, and since this blogpost is about speeding things up, you could also consider the immediacy index, which indicates how quickly items are cited after publication (a rough sketch of how it is calculated appears after this list).
  4. Can’t I just take a sneak peek at the impact factors? I think this is a last resort! Some people see them as a proxy for a good reputation, but after all I’ve read about bibliometrics, I’d rather use my twelve signs. In my last blogpost I reported on a Nature News item, which implied that middle-range impact factor journals are likely to have a faster turnaround time, but you’ll have to dig a bit deeper to see if there’s anything in that idea for your discipline. In my view, if everyone is targeting the top impact factor journals, you can be sure that these journals will have delays and high rejection rates. You might miss the chance to contribute to a “rising star” journal.
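
As an aside, since I mention the immediacy index above: here is a rough sketch of how that metric is usually calculated (this follows the standard Journal Citation Reports-style definition; the numbers in the worked example are made up purely for illustration).

```latex
% Immediacy index of a journal for a given year Y:
% citations received during year Y by items the journal published in year Y,
% divided by the number of items the journal published in year Y.
\[
  \text{Immediacy Index}_{Y} \;=\;
  \frac{\text{citations in year } Y \text{ to items published in year } Y}
       {\text{number of items published in year } Y}
\]
% Illustrative (made-up) example: 150 citations in 2016 to articles published
% in 2016, by a journal that published 100 articles that year, gives
% 150 / 100 = 1.5.
```

The higher the value, the sooner articles in that journal tend to be cited after they appear.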

Choose a perfect peer reviewer!
At some journals, you may get an option to suggest peer reviewers. I don’t imagine that there are many experts in your field who are so good at time management that they can magically create time, and who already know about and value your work, so you will have to balance your needs with what is on offer. Once again, you should be careful to follow the journal’s directions in suggesting peer reviewers. For example, it’s no good suggesting an expert practitioner as a peer reviewer if the journal explicitly asks for academics, and you probably can’t suggest your colleague either: read what the journal considers to be appropriate.

Is it the right peer review mechanism?
There are many variations of peer review, and some innovative practice might appeal to you if your main goal is speed of publication, so you could choose a journal that uses one of these modern methods.

Here is a list of some peer review innovations with acceleration in mind:

  1. You may have an option to pay for fast-tracked peer review at your journal of choice.
  2. Seek an independent peer review yourself, before submission. The same type of company that journals might turn to if they offer a paid-for fast track peer review may also offer you a report that you can pay for directly. The example I know of is Rubriq.
    You can also ask colleagues or peers for a pre-submission review, if you think that they might be willing.
  3. Take advantage of a “cascading peer review” gold open access (OA) route, at a publisher which offers that. It’s a shame that OA often appears to be a lower quality option, because publishers say to authors the equivalent of “you’re rejected from this top journal but are invited to submit to our gold OA journal”. Such an invitation doesn’t reflect well on the publishers either, because of course gold OA is the one where authors pay a fee or “Article Processing Charge”. However, if your research budget can cover the cost then this can be quicker.
  4. Open reviews: there is a possibility that reviewers will be more thorough if their reviews are publicly seen, so I’m not sure that this will necessarily speed the process up. But if you’re looking for explicit reasons why you’ve been rejected, then such a system could be helpful. PeerJ is a well known example of a journal that does this.
  5. Publish first and opt for post-publication peer review. The example often given is F1000, which is really a publishing platform rather than a journal. Here, the research is published first and labelled as “awaiting peer review”. It is indexed after peer review by services like PubMed, Scopus, the British Library, etc. F1000 also has open peer review, so the reviews as well as the latest version can be seen. Authors can make revisions after peer review, at any time. An alternative to F1000 is to put your draft paper into an open access repository where it will at least be visible and available, and to seek peer review through publication in a journal later. However, there are disciplinary differences as to whether this will be acceptable practice when you later submit to journals (is it a redundant publication because it’s in a repository?), and indeed whether your pre-print will be effective in claiming your “intellectual territory”. In some disciplines, the fear is that repository papers are not widely seen, so others might scoop you to reach recognised publication. In the sciences this is less likely, since access to equipment and lengthy experiments are not easily duplicated in time.

Be a peer reviewer, and be prompt with your responses
I have three steps you can follow, to accelerate even traditional peer review:

  1. When invited to carry out a peer review that you cannot find time for, or for which you are not the right person, say “no” quickly and perhaps suggest someone else suitable. This will speed things up for your peers and make a good impression on an editor: one day this might be important.
  2. If you say “yes” then you can be prompt and clear: this will support your peers but may also enhance your reputation. Larger publishers may track peer reviewers’ work on a shared (internal only or publicly visible!) system, and you can claim credit yourself somewhere like Publons. (See an earlier blogpost that discusses credit for peer review.)
  3. Are you setting the bar too high? Raising standards ever higher lengthens the time it takes for research to be shared. Of course this is also about meeting the quality standards of the journal and thereby setting and maintaining the standards of your discipline. Not an easy balancing act!

Finally, remember that publication is only the beginning of the process: you also have to help your colleagues, peers and practitioners to find out about your article and your work. Some editors and publishers have advice on how to do that too, so I’m sure that it will impress them if you do this!

Rejections, revisions, journal shopping and time… more and more time

I read a great news item from Nature, called “Does it take too long to publish research?” and wanted to highlight it here. In particular, I thought that early career researchers might relate to the stories of featured researchers’ multiple rejections: there is some consolation in hearing others’ experiences. (Recently rejected authors might also seek advice in a great piece from The Scientist in 2015: Riding out rejection.) Also, I wanted to write up my reflections, identifying some reasons for rejection (these appear in bold throughout, in case you want to scan for them).

Whilst I’m on the topic of rejection stories: a recent episode of Radio 4’s The Life Scientific featured Peter Piot, who described (if I understood correctly) how difficult it was to get his research on HIV published in the 1980s because it was so groundbreaking that reviewers could not accept it. He knew that his findings were important and he persevered. So that could be one reason for rejection: you’re ahead of your field!

(Peter Piot also described his time working for the United Nations, in what was essentially a break from his academic career: if you’re interested in academic career breaks then you could take a look at the Piirus blog!)

Anyway, back to the Nature news item, where I picked up particular themes:

  1. Authors will have been rejected a number of times before they are even peer reviewed: a “desk rejection”. One of the authors featured was glad to finally receive a request for revisions after so many rejections without explanation. When no explanation is given, we can’t know what the editors’ decisions were based on, but as I noted in an earlier post, editors might be basing their decisions on criteria like relevance to the journal’s readership, or compliance with the journal’s guidelines.
  2. Journals do report on time to publication, but that doesn’t always include the time you’ve spent on revisions. At some journals, if you resubmit after making revisions then the clock is re-started at the resubmission date. Likewise, I have read (or heard: sorry, I can’t find the link) elsewhere that the reported rejection/acceptance rates don’t count papers that are invited for re-submission with revisions as rejections. So you might feel rejected when you have to make so many revisions, but in statistical terms your paper has not been rejected (yet!). There is still time for it to be rejected after you have resubmitted, of course, and that probably happens more often than you think. Some think that journals are not counting and reporting fairly, and I think there is room for improvement, but it’s a complex area.
  3. Top journals can afford to be more picky and so the bar seems to have been raised, in terms of requirements for publication (hence increased numbers of authors per paper, who bring more data between them). As the Nature news item says: “Scientists grumble about overzealous critics who always seem to want more, or different, experiments to nail a point.”
  4. Rejections could be as a result of the authors “journal shopping”, whereby they submit to top/high impact journals first and work down a list. This is possibly due to a reliance on the reputation and impact factor of the journal where an article is published by those who hire and fund researchers. Researchers who target journals in the middle range of impact factor seem to stand the best chance of a quick review turnaround, but it seems that researchers are taking the risk of rejection and slower publication in order to stand a chance of appearing in a top journal.
  5. Journal editors and publishers are trying to ensure that the publication process is not slowed down, wherever possible. I’d like to quote one nice example of such attempts: “In 2009, Cell also restricted the amount of supplemental material that could accompany papers as a way to keep requests for “additional, unrelated experiments” at bay.” However, the Nature News item also points out the increased volume of papers to be processed and additional checks that papers might go through these days, for example plagiarism screens, animal welfare reports, competing interest disclosures, etc. Plagiarism screens can be tough: I remember an author telling me about how his paper was rejected for what amounted to self-plagiarism.
  6. The peer review process does take time, and at different journals this process might be quicker or slower, but even though (as I’ve previously blogged) there are pressures on the peer review system, it is not taking longer than it used to, on average. Neither has the digital world sped it up. The news item goes on to recount some of the innovations around peer review that various journals and publishers are implementing.

This made me think that there’s got to be a project somewhere, for someone to classify the revisions asked for in peer review processes and then count which is the most common. Reasons in my list so far:

  • poorly/not succinctly written (i.e. not intelligible!)
  • too little explanation/text
  • abstract doesn’t reflect findings
  • ethical issues with the data presented
  • ethical issues with the method
  • method unsuited to question
  • conclusions are over-reaching
  • needs to be set in context of other (specific/non-specific) research & add citations

These could be areas to be revised or indeed, reasons for rejection. I’m sure that there are more issue types and that my list is not complete, so feel free to share some more in the comments.

I know that some authors take the revision suggestions and do not resubmit to the journal that reviewed their article, but withdraw their article from that journal and then submit to one lower on the prestige list, thereby perhaps side-stepping another rejection. And thereby apparently achieving publication more quickly, for the second (or fifth or fifteenth) choice journal could not know of the time that the article spent awaiting the verdict of a different journal. Perhaps that is why journals prefer to count their publication time from the date of resubmission: they don’t know either whether an article will ever be resubmitted. And is it fair of an author to use a journal’s peer review process to polish their article, but not actually publish with that journal? A complex area, as I said already.

Well, if all this complexity has put you in need of cheering up, then I must recommend the Journal of Universal Rejection to you. If you don’t laugh then you might cry…

Do data librarians need soft skills or technical skills? Video clips from Frankfurt book fair

Last year I was lucky enough to attend the Frankfurt book fair, and took part in a panel session for Elsevier. They have produced some lovely little video clips, for those of you who weren’t there. Take a look at the clips, listed below: if you have time for just one, then I recommend that you watch Noelle’s summary (clip no. 6).

01 Dr. Heiner Schnelling on his “Library Dream Team”

02 Dr. Heiner Schnelling on traditional library skills in the future

03 Jenny Delasalle & Heiner Schnelling on engaging researchers

04 Jenny Delasalle on skills to manage data

05 Claus Grossmann on Elsevier content solutions

06 Noelle Gracy on whether technical skills trump soft skills