A useful tool for librarians: metrics knowledge in bite-sized pieces By Jenny Delasalle

Here is a guest blogpost that I wrote for the new, very interesting Bibliomagician blog.


Having worked in UK academic libraries for 15 years before becoming freelance, I saw the rise and rise of citation counting (although, as Geoffrey Bilder points out, it should rightly be called reference counting). Such counting, I learnt, was called “bibliometrics”. The very name sounds like something that librarians should be interested in, if not expert at, and so I delved into what these metrics were and how they might help me and also the users of academic libraries. It began with the need to select which journals to subscribe to, and it became a filter for readers to select which papers to read. Somewhere along the road, it became a measurement of individual researchers, and a component of university rankings: such metrics were gaining attention.

Then along came altmetrics, offering tantalising glimpses of something more than the numbers: real stories of impact that could be found through online tracking. Context…


Event reporting: An Open Science meet-up in Berlin

Last week I went along to an Open Science meet-up here in Berlin. It was hosted at the Centre for Entrepreneurship at the Technische Universitaet and the theme of the evening was

Academic Papers: collaboration, writing & discovery

There were presentations from two interesting, freshly developed collaboration tools for researchers:
  1. Paperhive – for having conversations about a paper, such that if you don’t understand something you can ask a question and someone else will answer it. It doesn’t create copies of papers but allows you to search for them, and when you view a paper through their interface you see the comments. Collaborative reading!
  2. Authorea – a tool for co-authoring a paper, which apparently works with LaTeX, Google Docs and other formats besides. It “puts emphasis on collaboration and structured, visual editing.” Collaborative writing!
Discussion at the meeting was interesting: it was led by Alex from Paperhive, who evoked the “spirit of open science”, i.e. collaboration and sharing. And we all did share: if you’re interested in such themes then take a look at Twitter conversations with the #openscience hashtag, as of course some folks tweeted at the event!
I chatted to fellow freelancers and to researchers including Franzi, who is involved in a citizen science project at Berlin’s Natural History Museum, and also Sebastian who works for an open access publisher – of great sounding digital books – Language Science Press.
I was left reflecting on how data sharing can be achieved: opening access to papers is one thing, but opening your data and your whole science is another. Being open from the beginning about methodologies can help people to join disparate studies together and share the same methodology, making the results of their research more powerful. But as ever, being open is just the start of the process, because you also have to make yourself heard! What channels are there for doing this? And of course, we all know of researchers who won’t release data because they want to get another five papers out of it themselves. Yet who can blame them in the publish-or-perish climate? What we measure and incentivise researchers for can have damaging effects, not least the salami slicing of research that would be far more meaningfully written up in a single paper, instead of across six! How can we make open data itself the output? Well, such themes are big and not for me to worry about, thank goodness. Last week was also the LIBER conference in Helsinki, where the library managers and repository and publishing folks were very busy discussing data-related themes. Once again, Twitter gives a flavour of the kind of things discussed there.

Explaining the g-index: trying to keep it simple

For many years now, I’ve had a good grip on what the h-index is all about: if you would like to follow this blogpost all about the g-index, then please make sure that you already understand the h-index. I’ve recently had a story published with Library Connect, which elaborates on my user-friendly description of the h-index. There are now many similar measures to the h-index, some of which are simple to understand, like the i10-index, which is just the number of papers you have published which have had 10 or more citations. Others are more difficult to understand, because they attempt to do something more sophisticated, and perhaps they actually do a better job than the h-index alone: it is probably wise to use a few of them in combination, depending on your purpose and your understanding of the metrics. If you enjoy getting to grips with all of these measures then there’s a paper reviewing 108 author-level bibliometric indicators which will be right up your street!
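As a quick illustration (my own, using a hypothetical citation list, not data from any real researcher), the h-index and the i10-index can each be expressed in a few lines of Python:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    cites = sorted(citations, reverse=True)
    # With the list sorted in descending order, the condition holds
    # for a prefix of the ranks, so counting the hits gives h.
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

def i10_index(citations):
    """Number of papers with 10 or more citations."""
    return sum(1 for c in citations if c >= 10)

papers = [50, 20, 10, 6, 4, 3]  # hypothetical citation counts
print(h_index(papers))    # 4: the top 4 papers each have at least 4 citations
print(i10_index(papers))  # 3: three papers reach double figures
```

Nothing more than sorting and counting is involved, which is exactly why these two measures are the easy ones.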

If you don’t enjoy these metrics so much but feel that you should try to understand them better, and you’re struggling, then perhaps this blogpost is for you! I won’t even think about looking at the algorithms behind Google PageRank-inspired metrics, but the g-index is one metric that even professionals who are not mathematically minded can understand. For me, understanding the g-index began with the excellent Publish or Perish website and book, but even this left me frowning. Wikipedia’s entry was completely unhelpful to me, I might add.

In preparation for a recent webinar on metrics, I redoubled my efforts to get the g-index into a manageable explanation. On the advice of my co-presenter from the webinar, Andrew Plume, I went back to the original paper which proposed the g-index: Egghe, L., “Theory and practice of the g-index”, Scientometrics, vol. 69, no. 1 (2006), pp. 131–152.

Sadly, I could not find an open access version, and when I did read the paper, I found it peppered with precisely the sort of formulae that make librarians like me want to run a mile in the opposite direction! However, I found a way to present the g-index at that webinar, which built nicely on my explanation of the h-index. Or so I thought! Follow-up questions from the webinar showed where I had left gaps in my explanation, and so this blogpost is my second attempt to explain the g-index in a way that leaves no room for puzzlement.

I’ll begin with my slide from the webinar:

g-index

 

I read out the description at the top of the table, which seemed to make sense to me. I explained that I needed the four columns to calculate the g-index, reading off the titles of each column. I explained that in this instance the g-index would be 6… but I neglected to say that this is because it is the last row of my table where the total number of citations (my right-hand column) is higher than or equal to the square of g.

Why did I not say this? Because I was so busy trying to explain that we can forget about the documents that have had no citations… oh dear! (More on those “zero cites” papers later.) In my defence, this is exactly the same as saying that the citations received altogether must be at least g squared, but when presenting something that is meant to be de-mystifying, the more descriptions, the better! So, again: the g-index in my table above is the document number (g) where the total number of citations is greater than or equal to the square of g (also known as g squared).

Also on reflection, for the rows where there were “0 cites” I should also have written “does not count” instead of “93” in the “Total number of citations” column, as people naturally asked afterwards why the g-index of my Professor X was not 9. In my presentation I had tried to explain what would happen if the documents with 0 citations had actually had a citation each, which would have yielded a g-index of 9, but I was not clear enough. I should have had a second slide to show this:

extra g-index

Here we can see that the g-index would be 9, because in the 9th row the total number of citations is higher than g squared, while in the 10th row the total number of citations is less than g squared.
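Since the tables themselves live in my slides, it may also help to see the rule as a small Python sketch (my own illustration, using a hypothetical citation list rather than Professor X’s actual slide data). The default behaviour matches my explanation above, counting only ever-cited papers; the `include_fictitious` flag matches the variant from Egghe’s “Note added in proof”, which I come back to further down:

```python
import math

def g_index(citations, include_fictitious=False):
    """Largest g such that the top g papers together have at least
    g**2 citations.

    By default only the papers in `citations` count (Egghe's "T" of
    ever-cited papers). With include_fictitious=True, the list is
    padded with zero-citation "filler" papers, as in Egghe's "Note
    added in proof" variant, so g can exceed the number of real papers.
    """
    cites = sorted(citations, reverse=True)
    if include_fictitious:
        # g can never exceed the integer square root of the total
        # citations, so padding to that length is always enough.
        n = math.isqrt(sum(cites))
        cites += [0] * max(0, n - len(cites))
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

papers = [50, 20, 10, 6, 4, 3]  # hypothetical: 6 cited papers, 93 citations in all
print(g_index(papers))          # 6: there is no 7th cited paper to count
print(g_index(papers, True))    # 9: 93 >= 9 squared, but 93 < 10 squared
```

Note how the same 93 citations give a g-index of 6 or 9 depending on which documents are allowed into “T”, which is exactly the source of confusion this blogpost is wrestling with.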

My “0 cites” was something of a complication and a red herring, and yet it is also a crucial concept, because there are many, many papers out there with 0 citations, and so there will be many researchers with papers that have 0 citations.

I also found, when I went back to that original paper by Egghe, that it has a “Note added in proof” which describes a variant where papers with zero citations, or indeed fictitious papers, are included in the calculation, in order to provide a higher g-index score. However, I have not used the variant. In the original paper Egghe refers to “T”, which is the total number of documents, or as he described it, “the total number of ever cited papers”. Documents that have never been cited cannot be part of “T”, and that’s why my explanation of the g-index excludes those documents with 0 citations. I believe that Egghe valued this feature of the h-index, i.e. that the most highly cited papers are represented in the single number, which is why I did not use the variant.

However, others have used the variant in their descriptions of the g-index and the way they have calculated it in their papers, especially in more recent papers that I’ve come across, so this confuses our understanding of exactly what the g-index is. Perhaps that’s why the Wikipedia entry talks about an “average” because the inclusion of fictitious papers does seem to me more like calculating an average. No wonder it took me such a long time to feel that I understood this metric satisfactorily!

My advice is: whenever you read about a g-index in future, be sure that you understand what is included in “T”, i.e. which documents qualify to be included in the calculation. There are at least three possibilities:

  1. Documents that have been cited.
  2. Documents that have been published but may or may not have been cited.
  3. Entirely fictitious documents that have never been published and act as a kind of “filler” for rows in our table to help us see which “g squared” is closest to the total number of citations!

I say “at least” because of course these documents are the ones in the data set that you are using, and there will also be variability there: from one data set to another and over time, as data sets get updated. In many ways, this is no different from other bibliometric measures: understanding which documents and citations are counted is crucial to understanding the measure.

Do I think that we should use the variant or not? In Egghe’s Note, he pointed out that it made no difference to the key finding of his paper which explored the works of prestigious authors. I think that in my example, if we want to do Professor X justice for the relatively highly cited article with 50 cites, then we would spread the total of citations out across the documents with zero citations and allow him a g-index of 9. That is also what the g-index was invented to do, to allow more credit for highly cited articles. However, I’m not a fan of counting fictitious documents. So I would prefer that we stick to a g-index where “T” is “all documents that have been published and which exist in the data set, whether or not they have been cited.” So not my possibility no. 1 which is how I actually described the g-index, and not my possibility no. 3 which is how I think Wikipedia is describing it. This is just my opinion, though… and I’m a librarian rather than a bibliometrician, so I can only go back to the literature and keep reading.

One final thought: why do librarians need to understand the g-index anyway? It’s not all that well used, so perhaps it’s not necessary to understand it. And yet, knowledge and understanding of some of the alternatives to the h-index and what they are hoping to reflect will help to ensure that you and the people who you advise, be they researchers or university administrators, will all use the h-index appropriately – i.e. not on its own!

Note: the slides have been corrected since this blogpost was first published. Thanks to the reader who helped me out by spotting my typo for the square of 9!

Quality checks beyond peer review? Retractions, withdrawals, corrections, etc

I often find myself reading/writing things about whether peer review is working or not, the opportunities for post publication peer review and about the changes needed in scholarly communication. An article in the THE earlier this year described a “secret dossier on research fraud” and the concerns it expresses are familiar, although I balk at the word “fraud”.  The THE article/its source claims that:

scientists and journals are extremely reluctant to retract their papers, even in the face of damning evidence

Perhaps the scientists don’t completely understand the processes that publishers use, nor feel able to influence the consequences for their reputations, which they must maintain in order to stand a chance of winning the next research grant and remaining employed. I used to give workshops to budding researchers on “how to get published”, when I would explain something of the publishing process to them, and my final slide was all about corrections, errata and retractions: what is the difference between them, and why and how do they occur? (Quick answers below!) Even if the reason for retraction should bring no shame, but honour for admitting a mistake, researchers still don’t want to have an article retracted.

Perhaps in the days of print there was even more reason for stringency in avoiding post-publication alterations: after all, the version of record, the print article, would have been impossible to correct, and researchers could only be alerted to any retractions or corrections through metadata records or, perhaps, if they were avid readers of a journal, by spotting notices in later editions. However, I do wonder if, in the digital world, there is more room for post-publication alterations without shame, in the name of improving science. This is why it is important for researchers and publishers to work together to define the different categories of such alterations and what they mean for a researcher’s reputation. There is a lack of clarity, which I think stems partially from a variety of practice among different journals, publishers and even database providers in how they describe and handle the various circumstances in which post-publication alterations are needed.

Corrections, corrigenda and errata are used by journals for minor corrections to a published work, eg the name of an author was mis-spelled, or a title not properly capitalised, or also for a minor error in an amount mentioned, eg a dosage. These are published in later issues in print, added to metadata records in the digital sphere, and also usually visible in the digital full text with a note in brackets after the corrected item. As a librarian, I’m interested in how this sort of information is transferred in metadata records: the U.S. National Library of Medicine website describes how these are usually all referred to as Errata in PubMed, and their page about this goes on to explain and categorise many different types of correction.

For me, these are a very good reason to ensure that you read the final published version of an article that you intend to cite: the green OA pre-print version of an article is useful for you to understand the work, but not the one I recommend citing.

Retractions are when an article is withdrawn: this is something that you can do as the author, or indeed your institution could do it on your behalf (sometimes also called a withdrawal, see below), or the editor or publisher of a journal can retract an article. Reasons for retraction of an article include a pervasive (but honest) error in the work, or sometimes might be for unethical practice. I can’t recommend the RetractionWatch blog highly enough for examples and stories of retractions. Sometimes you also hear about a partial retraction which might occur when only one figure or part of the conclusions is withdrawn, whilst the rest of the paper is sound.

Withdrawals are when a paper is no longer included in a publication, often when it has accidentally been published twice. I am increasingly hearing of fees being charged to authors for a withdrawal. Publishers usually have policies about what they consider to be grounds for a withdrawal: see Elsevier’s explanation of withdrawals and retractions, for example.

My explanations are a very light-touch introduction to the subject: publishers’ guidance will give you more of an idea about what might happen to your own articles, but I do see a variety of terminology and practice. My advice to academics is never to assume that work which has been corrected or retracted is necessarily suspect, nor that it should affect a researcher’s reputation unless the whole story is known. Just as we can’t take bibliometric or altmetric scores as the whole picture of an academic’s worth: we always need context. If we all did this, then there would be no reason for authors to resist retraction, but I know that that is an ideal. Hence the story in the THE which I began with…

 

 

How do researchers share articles? Some useful links

This is a topic that interests me: how do researchers choose what to read? Where are the readers on our platforms coming from, when we can’t track a source URL? What are researchers doing in collaboration spaces? (Research processes are changing fast in the Internet era.) Is journal article sharing that is taking place legal and/or ethical? I’m a big fan of Carol Tenopir‘s work investigating readers’ behaviours and I think there’s much to learn in this area. Sharing an article does not equate to it having been read, but it is a very interesting part of the puzzle of understanding scholarly communication.


Usage is something that altmetrics are displaying (the altmetric.com donut has a section for “Readers” which incorporates information from Mendeley), and it’s just possible that usage would become a score to rival the impact factor, when evaluating journals. It does often seem to me like we’re on a quest for a mythical holy grail, when evaluating journals and criticising the impact factor!

Anyway, what can we know about article sharing? In my last blogpost I highlighted BrightTALK as a way to keep up to date with library themes. The LibraryConnect channel features many useful webinars & presentations (yes, I spoke at one of them), and I recently listened to a webinar on the theme of this blogpost’s title, which went live in December 2015. My notes & related links:

Suzie Allard of the University of Tennessee (colleague of Carol Tenopir) spoke about the “Beyond Downloads” project and their survey’s main takeaways. These include that nearly 74% of authors preferred email as a method of sharing articles. Authors may share articles to aid scientific discovery in general, to promote their own work, or indeed for other reasons, nicely illustrated in an infographic on this theme!

Lorraine Estelle of Project COUNTER spoke about the need for comprehensive and reliable data, and described just how difficult it is to gather such data. (I can see that tracking everyone’s emails won’t go down well!) There are obviously disciplinary and demographic differences in the way that articles are shared, and therefore read, and she listed nine ways of sharing articles:

  1. email
  2. internal networks
  3. the cloud
  4. reference managers
  5. learning management systems
  6. research social networks
  7. general social networks
  8. blogs
  9. other

Lorraine also introduced some work that COUNTER are doing jointly with Crossref: DOI tracking and Distributed Usage Logging, both definitely worth further reading and investigation!

Wouter Haak from Elsevier spoke about what you, as an author, can see about readers of your articles on Mendeley’s dashboard. He also spoke about a prototype they are developing for libraries, on which institutions could see the countries where collaborations involving their own researchers are taking place. More intriguingly (to me), he talked about a working group that he was part of, whereby major scientific publishers are apparently agreeing to support sharing of articles amongst researchers within collaboration groups, on platforms like Mendeley, Academia.edu and ResearchGate, which he described as “Scholarly Collaboration Networks”. Through such a collaboration, the sharing activity across these platforms could all be tracked and reported on. Perhaps it is easier to lure researchers away from email than to track emails!

 

[Photo credit: Got Credit]

Keeping up with academic library themes

Working mostly from home, I don’t talk to colleagues as often as I used to. Also, being freelance, I don’t have as much opportunity to attend training sessions and conferences as I used to have, but nevertheless, it’s important for me to keep in touch with developments in my discipline and improve my skills, just like Siobhan O’Dwyer described in the case of early career researchers. There are some sources that I particularly value for keeping me informed and up to date, which I wanted to highlight here:

  1. For keeping researchers and their needs in mind, good lunchtime entertainment: Radio 4’s Inside Science and The Life Scientific.
  2. BrightTALK channels: I like to listen to these whilst doing other stuff, and if they’re really good then I tune in and look at the slides too!
  3. Email lists & newsletters: Jiscmail for the UK and the ALA for the US. Daily digests help to keep it manageable to follow these. I also get a regular roundup of news from ResearchInformation.
  4. Blogs: I especially like dipping into the Scholarly Kitchen, RetractionWatch, LSE’s Impact of Social Sciences, Nature blogs and lately, Danny Kingsley of the University of Cambridge. The easiest way to follow such blogs? Twitter feeds!
  5. Twitter: I like to keep an eye on the following hashtags: #ecrchat, #uklibchat, #librarians, #altmetrics, #OA and a recent discovery: #publishinginsights. Actually, I’ve been collecting academic hashtags along with colleagues from piirus.ac.uk, so if you want more then take a look!
  6. A MOOC? I did one MOOC module recently and blogged about it for my regular client, piirus. It was my first MOOC and it’s not an investment of time to be underestimated, but very much worthwhile. If you’re looking for one to suit you, then the platform for the one I did was edX, and you can find lots of courses on their site.

Finally, and this does count as a learning experience (honest!): I go to a local knitting group to practise & keep up my German. It’s amazing what you can learn from such a group – and not only vocabulary!

What sources do you regularly turn to, or recommend?

How to speed up publication of your research – and impress journal editors

In my last blogpost I looked at the time it takes to get published, and this led to a brief Twitter chat about how editors’ time gets wasted. Of course there are things that researchers can do to help speed up the whole system, just as there are things that publishers are trying to do. If you’re interested in how to write a great journal article in the first place (which, of course, is what will increase your chances of acceptance and therefore speed things up) then you could take a look at some great advice in the Guardian.

I’m not looking at writing in this blogpost, rather at the steps to publication that researchers can influence, sometimes for themselves and sometimes more altruistically. I imagine that a board game could be based on the academic publication process, whereby you get cards telling you that you must wait longer, or you get rejected, and sent to the start. Very occasionally you are told that a peer has sped things up for you in some way so that you (and your field) can move on.

Do what you’re told!
It sounds simple, but it’s amazing how many editors report that authors appear not to have read the guidelines before submitting. Wrong word counts, line spacing, no data supplied, wrong reference formats, etc. could all result in a desk rejection, thus wasting everyone’s time. A good reference managing tool will ease and expedite reference style reformatting, but even so, matching each journal’s style is a lot of work if you submit the same article to many journals, so perhaps this begins with choosing the right journal (see below).

Also, authors who are re-submitting need to ensure that they respond to ALL the editor’s and reviewers’ recommendations. Otherwise, there might be another round of revisions… or a rejection, setting you back to square one.

Be brief and ‘to the point’ in your correspondence with journal editors
First question to authors: do you really need to write to the editor? Writing to check if their journal is a good match for your article is apparently annoying to journal editors, especially if your email looks like an automated one. If you have a question, be sure that you can’t find the answer on the journal’s website: this way you can save editors’ time so that they use it to make the right publishing decisions. If you want to make a good impression on an editor or seek their opinion then perhaps find a way to meet them personally at a conference. (Tip: if they are on Twitter then they might announce which conferences they are going to!)

Choose the right journal to submit to

I have no magic formula but these steps might help you to decide:

  1. Look for a good subject match. Then check whether the type, scale and significance of your work fit the type of material usually published in that journal. In other words, read some of the content recently published in the journal you intend to submit to. Check their calls for papers and see if you match them. And read their guidelines (see above).
  2. Listen to experienced authors. If you know someone with experience of publishing in a particular journal, then perhaps ask them for advice: getting to know the journal you are submitting to is important in helping you to target the right one.
  3. Use bibliometric scores with caution. I have blogged here previously about 12 signs of quality for a journal, and note that I don’t mention the impact factor! My number 1 is about peer review, and I expand on that in this post, below. My number 5 is whether the journal is indexed on Web of Science or Scopus: this is not all about the impact factor either. What it means is that the journal you are considering has passed selection criteria in order to be indexed at all, that your article will be highly discoverable, and that it would contribute to your own h-index as an author. If you really want to use a bibliometric, you could look at the article influence scores, and since this blogpost is about speeding things up, then you could also consider the immediacy index, which indicates how quickly items are cited after publication.
  4. Can’t I just take a sneaky peek at the impact factors? I think this is a last resort! Some people see them as a proxy for a good reputation, but after all I’ve read about bibliometrics, I’d rather use my twelve signs. In my last blogpost I reported on a Nature News item, which implied that middle-range impact factor journals are likely to have a faster turnaround time, but you’ll have to dig a bit deeper to see if there’s anything in that idea for your discipline. In my view, if everyone is targeting the top impact factor journals, you can be sure that these journals will have delays and high rejection rates. You might miss the chance to contribute to a “rising star” journal.

Choose a perfect peer reviewer!
At some journals, you may get an option to suggest peer reviewers. I don’t imagine that there are many experts in your field who are so good at time management that they can magically create time, and who already know about and value your work, so you will have to balance your needs with what is on offer. Once again, you should be careful to follow the journal’s directions in suggesting peer reviewers. For example, it’s no good suggesting an expert practitioner as a peer reviewer if the journal explicitly asks for academics, and you probably can’t suggest your colleague either: read what the journal considers to be appropriate.

Is it the right peer review mechanism?
There are many variations of peer review, and some innovative practice might appeal to you if your main goal is speed of publication, so you could choose a journal that uses one of these modern methods.

Here is a list of some peer review innovations with acceleration in mind:

  1. You may have an option to pay for fast tracked peer review at your journal of choice.
  2. Seek an independent peer review yourself, before submission. The same type of company that journals might turn to if they offer a paid-for fast track peer review may also offer you a report that you can pay for directly. The example I know of is Rubriq.
    You can also ask colleagues or peers for a pre-submission peer review, if you think that they might be willing.
  3. Take advantage of a “cascading peer review” gold open access (OA) route, at a publisher which offers one. It’s a shame that OA often appears to be a lower quality option, because publishers say to authors the equivalent of “you’re rejected from this top journal but are invited to submit to our gold OA journal”. Such an invitation doesn’t reflect well on the publishers either, because of course gold OA is the route where authors pay a fee or “Article Processing Charge”. However, if your research budget can cover the cost then this can be quicker.
  4. Open reviews: there is a possibility that reviewers will be more thorough if their reviews are publicly seen, so I’m not sure that this will necessarily speed the process up. But if you’re looking for explicit reasons why you’ve been rejected, then such a system could be helpful. PeerJ is a well known example of a journal that does this.
  5. Publish first and opt for post publication peer review. The example often given is F1000, which is really a publishing platform rather than a journal. Here, the research is published first, and labelled as “awaiting peer review”. It is indexed after peer review by places like PubMed, Scopus, the British Library, etc. F1000 also has open peer review, so the reviews as well as the latest version can be seen. Authors can make revisions after peer review and at any time. An alternative to F1000 is that you can put your draft paper into an open access repository where it will at least be visible and available, and seek peer review through publication in a journal later. However, there are disciplinary differences as to whether this will be acceptable practice when you later submit to journals (is it a redundant publication because it’s in a repository?), and indeed whether your pre-print will be effective in claiming your “intellectual territory”. In some disciplines the fear is that repository papers are not widely seen, so others might scoop you to reach recognised publication first. In the sciences this is less likely, since access to equipment is limited and lengthy experiments are unlikely to be duplicated in time.

Be a peer reviewer, and be prompt with your responses
I have three steps you can follow, to accelerate even traditional peer review:

  1. When invited to carry out a peer review that you cannot find time for, or for which you are not the right person, you can quickly say “no”, and perhaps suggest someone else suitable. This will speed things up for your peers and make a good impression on an editor: one day this might be important.
  2. If you say “yes” then you can be prompt and clear: this will support your peers but may also enhance your reputation. Larger publishers may track peer reviewers’ work on a shared (internal only or publicly visible!) system, and you can claim credit yourself somewhere like Publons. (See an earlier blogpost that discusses credit for peer review.)
  3. Are you setting the bar too high? Raising standards ever higher lengthens the time it takes for research to be shared. Of course, this is also about meeting the quality standards of the journal, and thereby setting and maintaining the standards of your discipline. Not an easy balancing act!

Finally, remember that publication is only the beginning of the process: you also have to help your colleagues, peers and practitioners to find out about your article and your work. Some editors and publishers have advice on how to do that too, and I’m sure it will impress them if you follow it!

Rejections, revisions, journal shopping and time… more and more time

I read a great news item from Nature, called “Does it take too long to publish research?”, and wanted to highlight it here. In particular, I thought that early career researchers might relate to the featured researchers’ stories of multiple rejections: there is some consolation in hearing others’ experiences. (Recently rejected authors might also seek advice in a great piece from The Scientist in 2015: Riding out rejection.) I also wanted to write up my reflections, identifying some reasons for rejection (these appear in bold, throughout, in case you want to scan for them).

Whilst I’m on the topic of rejection stories: a recent episode of Radio 4’s The Life Scientific featured Peter Piot, who described (if I understood correctly) how difficult it was to get his research on HIV published in the 1980s because it was so groundbreaking that reviewers could not accept it. He knew that his findings were important and he persevered. So that could be one reason for rejection: you’re ahead of your field!

(Peter Piot also described his time working for the United Nations, in what was essentially a break from his academic career: if you’re interested in academic career breaks then you could take a look at the Piirus blog!)

Anyway, back to the Nature news item, where I picked up particular themes:

  1. Authors may be rejected a number of times before their paper is even peer reviewed: a “desk rejection”. One of the authors featured was glad to finally receive revision requests after so many unexplained rejections. Without an explanation, we can’t know what the editors’ decisions were based on, but as I noted in an earlier post, editors might be basing their decisions on criteria like relevance to the journal’s readership, or compliance with the journal’s guidelines.
  2. Journals do report on time to publication, but that doesn’t always include the time you’ve spent on revisions: at some journals, if you resubmit after making revisions then the clock is restarted at the resubmission date. Likewise, I have read (or heard: sorry, I can’t find the link) that reported rejection/acceptance rates don’t count papers invited for re-submission with revisions as rejections. So you might feel rejected when you have to make so many revisions, but in statistical terms your paper has not been rejected (yet!). There is still time for it to be rejected after you resubmit, of course, and that probably happens more often than you think. Some think that journals are not counting and reporting fairly; I think there is room for improvement, but it’s a complex area.
  3. Top journals can afford to be more picky and so the bar seems to have been raised, in terms of requirements for publication (hence increased numbers of authors per paper, who bring more data between them). As the Nature news item says: “Scientists grumble about overzealous critics who always seem to want more, or different, experiments to nail a point.”
  4. Rejections can also result from authors “journal shopping”, whereby they submit to top/high-impact journals first and work down a list. This is possibly because those who hire and fund researchers rely on the reputation and impact factor of the journal where an article is published. Researchers who target journals in the middle range of impact factor seem to stand the best chance of a quick review turnaround, but it seems that many accept the risk of rejection and slower publication in order to stand a chance of appearing in a top journal.
  5. Journal editors and publishers are trying to ensure that the publication process is not slowed down, wherever possible. I’d like to quote one nice example of such attempts: “In 2009, Cell also restricted the amount of supplemental material that could accompany papers as a way to keep requests for “additional, unrelated experiments” at bay.” However, the Nature news item also points out the increased volume of papers to be processed, and the additional checks that papers might go through these days: plagiarism screens, animal welfare reports, competing interest disclosures, etc. Plagiarism screens can be tough: I remember an author telling me how his paper was rejected for what amounted to self-plagiarism.
  6. The peer review process takes time, and at different journals it might be quicker or slower, but even though (as I’ve previously blogged) there are pressures on the peer review system, it is not taking longer than it used to, on average. Neither has the digital world sped it up. The news item goes on to recount some of the innovations around peer review that various journals and publishers are implementing.

This made me think that there must be a project somewhere for someone to classify the revisions requested in peer review processes and then count which are the most common. Reasons on my list so far:

  • poorly/not succinctly written (i.e. not intelligible!)
  • too little explanation/text
  • abstract doesn’t reflect findings
  • ethical issues with the data presented
  • ethical issues with the method
  • method unsuited to question
  • conclusions are over-reaching
  • needs to be set in context of other (specific/non-specific) research & add citations

These could be areas to be revised or indeed, reasons for rejection. I’m sure that there are more issue types and that my list is not complete, so feel free to share some more in the comments.

I know that some authors take the revision suggestions but do not resubmit to the journal that reviewed their article: they withdraw it and submit to one lower on the prestige list, thereby perhaps side-stepping another rejection. They also apparently achieve publication more quickly, since the second (or fifth, or fifteenth) choice journal cannot know how long the article spent awaiting the verdict of a different journal. Perhaps that is why journals prefer to count their publication time from the date of resubmission: they don’t know either whether an article will ever be resubmitted. And is it fair of an author to use a journal’s peer review process to polish an article, but not actually publish with that journal? A complex area, as I said.

Well, if all this complexity has put you in need of cheering up, then I must recommend the Journal of Universal Rejection to you. If you don’t laugh then you might cry…

Do data librarians need soft skills or technical skills? Video clips from Frankfurt book fair

Last year I was lucky enough to attend the Frankfurt book fair, and took part in a panel session for Elsevier. They have produced some lovely little video clips, for those of you who weren’t there. Take a look at the clips, listed below: if you have time for just one, then I recommend that you watch Noelle’s summary (clip no. 6).

01 Dr. Heiner Schnelling on his “Library Dream Team”

02 Dr. Heiner Schnelling on traditional library skills in the future

03 Jenny Delasalle & Heiner Schnelling on engaging researchers

04 Jenny Delasalle on skills to manage data

05 Claus Grossmann on Elsevier content solutions

06 Noelle Gracy on whether technical skills trump soft skills

Publish then publicise & monitor. Publication is not the end of the process!

Once your journal article or research output has been accepted and published, there are lots of things that you can do to spread the word about it. This blogpost has my own list of the top four ways you could do this (other than putting it on your CV, of course). I also recommend any biologists or visual thinkers to look at:
Lobet, Guillaume (2014): Science Valorisation. figshare. http://dx.doi.org/10.6084/m9.figshare.1057995
Lobet describes the process as “publish: identify yourself: communicate”, and points out useful tools along the way, including recommending that authors identify themselves in ORCID, ResearchGate, Academia.edu, ImpactStory and LinkedIn. (Such services can create a kind of online, public CV and my favourite for researchers is ORCID.) You may also find that your publisher offers advice on ways to publicise your paper further.

PUBLICISE

1) Talk about it! Share your findings formally at a conference. Mention it in conversations with your peers. Include it in your teaching.

2) Tweet about it! If you’re not on Twitter yourself (or even if you are!) then you could ask a colleague to tweet about it for you. A co-author or the journal editor or publisher might tweet about it, or you could approach a University press officer. If you tweet yourself then you could pin the tweet about your latest paper to your profile on Twitter.

3) Open it up! Add your paper to at least one open access repository, such as your institutional repository (they might also tweet about it). This way your paper will be available even to those who don’t subscribe to the journal. You can find an OA repository via ROAR or OpenDOAR. Each repository has its own community of visitors and its own ways of helping people discover your content, so you might choose more than one repository: perhaps one for your paper and one for data or other material associated with it. If you put an object into Figshare, for example, it will be assigned a DOI, and that will be really handy for gathering altmetrics.

4) Be social! Twitter is one way to do this already, of course, but you could also blog about your paper, on your own blog or perhaps as a guest post for an existing blog that already has a large audience. You could put visual content like slides and infographics on SlideShare, and send out an update via LinkedIn. Choose at least one more social media channel of your choice for each paper.
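Once a deposited object has a DOI, you can also check programmatically that its metadata is discoverable. Below is a minimal sketch using the public Crossref REST API (DOIs minted by Figshare are registered with DataCite, which offers a similar API); the field names are assumptions based on Crossref’s documented response format, so check the live API before relying on this.

```python
# A minimal sketch of looking up a paper's public metadata by DOI
# via the Crossref REST API. Field names are assumptions based on
# Crossref's documented response format.
import json
import urllib.request

def title_from_record(message: dict) -> str:
    """Crossref returns titles as a list; take the first, if any."""
    titles = message.get("title") or []
    return titles[0] if titles else ""

def lookup_doi(doi: str) -> str:
    """Fetch a work's Crossref record and return its title."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    return title_from_record(record["message"])

# Example (requires network access; substitute a real DOI):
# print(lookup_doi("10.1234/example-doi"))
```

If the lookup returns your title, anyone resolving the DOI will find your work; if not, it is worth checking the deposit with your repository.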

MONITOR

  1. Watch download stats for your paper, on your publisher’s website. Measuring the success of casual mentions is difficult, but you can often see a spike in download statistics for a paper, after it has been mentioned at a conference.
  2. Watch Twitter analytics: is the tweet about your paper one of your top tweets? You can see how many “engagements” a tweet has, i.e. how many clicks, favourites, re-tweets, replies and so on it accrued. If you use a link-shortening service (bit.ly is one of many), you should also be able to see how many clicks there have been on your link, and where they came from. This is the measure I value most: if no-one is clicking through to your content, then perhaps Twitter is not working for you, and you could investigate why not, or focus on more effective channels.
  3. Repositories will often offer you download stats, just like your publisher, and either or both may offer you access to an altmetrics tool. Take a look at these to see the information behind the numbers: who is interested and engaged with your work, and how can you use this knowledge? Perhaps it will help you choose which other social media channels to use, by showing where people in your discipline are already engaging with your work.
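If you would rather pull such attention data yourself, here is a minimal sketch against Altmetric’s free v1 API. The endpoint and field names are assumptions based on its public documentation, and the parsing step is separated out so it can be checked without a network call.

```python
# Sketch: fetch headline attention counts for a DOI from the public
# Altmetric v1 API. Endpoint and field names are assumptions; see
# api.altmetric.com for the current documentation.
import json
import urllib.request

def summarise(record: dict) -> dict:
    """Pull a few headline counts out of an Altmetric record."""
    return {
        "score": record.get("score", 0),
        "tweets": record.get("cited_by_tweeters_count", 0),
        "readers": record.get("readers_count", 0),
    }

def fetch_attention(doi: str) -> dict:
    """Look up a DOI and summarise its attention data."""
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    with urllib.request.urlopen(url) as resp:
        return summarise(json.load(resp))

# Example (requires network access; substitute a real DOI):
# fetch_attention("10.1234/example-doi")
```

A script like this, run periodically, gives you the same spike-spotting view as the dashboards, but across all your outputs at once.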

Ultimately, you might be interested in citations rather than engagements on Twitter or even webpage visits or downloads for your paper. It’s hard to draw a definite connection between such online activity and citations for journal papers, but I’m pretty sure that no-one is going to cite your paper if they don’t even know it exists, so if this is important to you, then I would say, shout loud!