Learning about Swiss libraries

pretty spire and buildings, with blue sky in Zurich

Last week I was privileged to be a speaker at the Library Connect event in Zurich. I talked about research impact metrics and presented the handy cards/poster that I worked on, but my brief was to run a workshop so I didn’t talk too much! I explained why I think bibliometrics are part of the librarian’s domain and summarised the Field-Weighted Citation Impact (FWCI); then it was on to our workshop discussions. I was really glad to hear more from the attendees about their experiences, and I think it was a real strength of the event that librarians got to talk to each other.

participants around a coffee table, with lots of papers on it.
Workshopping!

I’ve been to the Nordic Library Connect event in the past, but what was really nice about the Swiss one was that we had researchers as well as librarians there, and the setting was nice and informal, so we had lots of conversations in the breaks as well as in the workshop itself. Whereas most of the Scandinavian librarians were from large central university libraries, at Zurich there were more librarians from smaller departmental and embedded libraries. I get the impression that in German-speaking areas in general, departmental libraries are more common than in the UK and Scandinavia.

Departmental librarians have slightly different concerns, reflecting the needs of the particular subject community they serve. I chatted (in my clunky German!) with two librarians from the University of Zurich Economics department library, who reminded me of the importance of working papers amongst their community. And it was interesting to hear perspectives from CERN, where they have excellent data about their publications and of course the arXiv resource. I’ve also learnt that the ETH Domain has a library service called “Lib4RI” that serves four of its scientific research institutes.

I was really pleased to see Dr Oliver Renn of ETH Zurich again, who had been a speaker at the Stockholm event. His library (or “Infozentrum“) really seems to have good links with his department, and I can highly recommend a special edition of their newsletter, which presents various attitudes towards bibliometrics. The ETH Department of Chemistry and Applied Biosciences uses the Altmetric donut so that their researchers can see who is giving their outputs attention, and they are working with Kudos to promote their science.

Charon stands before a projected slide, with notes in hand - smiling!
Charon in action!

A highlight of the day for me was Charon Duermeijer talking about research ethics and prompting us all to think about our role in supporting researchers with such matters. I highly recommend her as a speaker because she interacts with the audience, asking us questions, and her slides have real substance. I’m sure that she’ll be sharing her slides, so you can get something of a feel for her talk, but her passion and anecdotes will be missing, so catch her at another event if you can.

And if you get a chance to visit Zurich, then I highly recommend it!

blue lake with blue sky above and a jetty protruding into the lake. On the horizon are mountains, some capped with snow.

Bibliometrics and the academic librarian

Next week I’m going to be at another Elsevier Library Connect event, this time in Zurich (you can still register if you want to join in!). These events are usually attended by librarians who are not bibliometricians, and often there are bibliometric specialists elsewhere in their libraries or universities. But I think that there’s a need for librarians of many kinds to develop an understanding of bibliometrics, and I look forward to discussing with attendees the uses of bibliometrics they’ve come across, and what they think librarians can contribute to the bibliometrics community. Here are some of my thoughts on the topic.

The field of bibliometrics seems to me to be growing: there are ever more studies being published. The knowledge and skills of these academic experts often seem intimidating to me as a practitioner and librarian. There are new developments all the time, which can make it seem hard to keep up to date, such as a recent initiative to open citation data via Crossref.

Meanwhile, I notice ever more job advertisements for new kinds of roles in library services or university administration, such as “Bibliometric specialist”, “Bibliometrician”, or roles related to research impact which involve using bibliometric (as well as altmetric) data and tools. These are jobs for people who are used to handling huge amounts of data and applying sophisticated analysis techniques to create reports. Expertise with mathematical and statistical methods is required: that kind of expertise was never part of my training and I feel left behind, but I don’t see that as a problem.

I’ve come to bibliometrics through a rather winding route and I’m interested in a lot more than just bibliometrics: I like watching many developments in the world of scholarly communication, such as open access and open science, but also developments in peer review and so on. If you browse this blog you’ll get a flavour!

I have no intention of specialising in bibliometrics nor of spending my days producing bibliometric analyses: I’m simply not the best person to be doing that kind of work. Is there a role for someone like me (an ordinary librarian rather than specialist bibliometrician), within the bibliometrics community? I think so…

In my view, great librarians are able to connect people with the information that they need: I take this, of course, from Ranganathan’s laws. We might do this behind the scenes through collection management which enables independent discovery, or in person, through a traditional enquiry or reference interview. (For illustration and entertainment, if you haven’t seen this helpdesk video then I highly recommend it!)

In the university setting, the resources that we offer as part of the library collection are being used to generate and to provide bibliometric data and measures. Deciding which sources of such data to add to the collection has sometimes been part of libraries’ collection management work. And indeed bibliometric scores like the impact factor might influence journal acquisition or cancellation decisions – although there are many factors to consider when evaluating journals.

Library users include researchers and scholars who are increasingly aware of and concerned about bibliometric scores, and in my view many could use some support. Of course, some researchers will take an interest in bibliometric research and learn way more than I ever could about it all. However, other researchers, while perfectly able to understand bibliometrics research, simply have other priorities, and yet others will not have had mathematics and statistics training and so will find bibliometric scores no easier to understand than a librarian like myself does.

And this is why I think that the ordinary librarian should remain involved in the bibliometrics scene: if we can understand bibliometric measures and significant developments in the field, then not only can we pass that knowledge on to our user community, but our understanding is also a sign that such measures can be grasped by all the academics who might need to use them.

A scholarly field grows when the experts develop ever more sophisticated methods, and I am no scholar of bibliometrics so it’s fine that I am left behind. But bibliometrics are being used in the real world, as part of national research evaluation exercises, in university ranking schemes and indeed within author online profiles. Academic librarians know both the people involved and the people affected by such developments: we are central to universities, and can act as links, bridging the specialists who do bibliometric analyses for a university and the scholars whose careers are affected.

So the perspective of the intelligent lay person, the library practitioner, is a valuable one for the bibliometrics community: if we understand the measures then others will be able to, and we can help to spread the message about how such measures are being used.

I look forward to discussing more with the librarians who are coming to Zurich…

 

Snowy Stockholm and Nordic Librarians!

Picture from Twitter @Micha2508

Last week I attended Elsevier’s Nordic Library Connect event in Stockholm, Sweden. I presented the metrics poster/cards and slide set that I had previously researched for Elsevier. It’s a great poster but the entire set of metrics takes some digesting. Presenting them all as slides in around 30 minutes was not my best idea, even for an audience of librarians! The poster itself was popular though, as it is useful to keep on the wall somewhere to refer to, to refresh your knowledge of certain metrics:

https://libraryconnect.elsevier.com/sites/default/files/ELS_LC_metrics_poster_V2.0_researcher_2016.pdf

I reflected after my talk that I should probably have chosen a few of the metrics to present, and then added more information and context, such as screen captures of where to find these metrics in the wild. It was a very useful experience, not least because it gave me this idea, but also because I got to meet some lovely folks who work in libraries in the Scandinavian countries.

UPDATE 23 Nov 2016: now you can watch a video of my talk (or one of the others) online.

I met these guys… but also real people!

I particularly valued a presentation from fellow speaker Oliver Renn of ETH Zurich. He has obviously built up a fantastic relationship with the departments that his library serves. I thought that the menus he offered were inspired. These are explained in the magazine that he also produces for his departments: see p8 of this 2015 edition.

See tweets from the event by clicking on the hashtag in this tweet:

 

Explaining the g-index: trying to keep it simple

For many years now, I’ve had a good grip on what the h-index is all about: if you would like to follow this blogpost all about the g-index, then please make sure that you already understand the h-index. I’ve recently had a story published with Library Connect, which elaborates on my user-friendly description of the h-index. There are now many similar measures to the h-index, some of which are simple to understand, like the i10-index, which is just the number of papers you have published which have had 10 or more citations. Others are more difficult to understand because they attempt to do something more sophisticated, and perhaps they actually do a better job than the h-index alone: it is probably wise to use a few of them in combination, depending on your purpose and your understanding of the metrics. If you enjoy getting to grips with all of these measures then there’s a paper reviewing 108 author-level bibliometric indicators which will be right up your street!
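
If it helps to see the arithmetic spelled out, here is a minimal sketch (in Python, with invented citation counts) of the two simpler author-level measures just mentioned, the h-index and the i10-index; the function names are my own, not from any particular tool.

```python
# A minimal sketch (not any database's official implementation): the h-index
# and i10-index computed from a list of citation counts, one entry per paper.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with ten or more citations."""
    return sum(1 for cites in citations if cites >= 10)

example = [50, 18, 12, 10, 6, 5, 1, 1, 0, 0]  # invented citation counts
print(h_index(example))    # 5: five papers each have at least 5 citations
print(i10_index(example))  # 4: four papers have 10 or more citations
```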

If you don’t enjoy these metrics so much but feel that you should try to understand them better, and you’re struggling, then perhaps this blogpost is for you! I won’t even think about looking at the algorithms behind Google PageRank inspired metrics, but the g-index is one metric that even professionals who are not mathematically minded can understand. For me, understanding the g-index began with the excellent Publish or Perish website and book, but even this left me frowning. Wikipedia’s entry was completely unhelpful to me, I might add.

In preparation for a recent webinar on metrics, I redoubled my efforts to get the g-index into a manageable explanation. On the advice of my co-presenter from the webinar, Andrew Plume, I went back to the original paper which proposed the g-index: Egghe, L., “Theory and practice of the G-index”. Scientometrics, vol. 69, no. 1, (2006), pp. 131–152

Sadly, I could not find an open access version, and even when I did read the paper, I found it peppered with precisely the sort of formulae that make librarians like me want to run a mile in the opposite direction! However, I found a way to present the g-index at that webinar, which built nicely on my explanation of the h-index. Or so I thought! Follow-up questions from the webinar showed where I had left gaps in my explanation, and so this blogpost is my second attempt to explain the g-index in a way that leaves no room for puzzlement.

I’ll begin with my slide from the webinar:

g-index

 

I read out the description at the top of the table, which seemed to make sense to me. I explained that I needed the four columns to calculate the g-index, reading off the titles of each column. I explained that in this instance the g-index would be 6… but I neglected to say that this is because it is the last row in my table where the total number of citations (my right-hand column) is higher than or equal to the square of g.

Why did I not say this? Because I was so busy trying to explain that we can forget about the documents that have had no citations… oh dear! (More on those “zero cites” papers later.) In my defence, this is exactly the same as saying that the citations received altogether must be at least g squared, but when presenting something that is meant to be de-mystifying, the more descriptions, the better! So, again: the g-index in my table above is the highest document number (g) at which the total number of citations is greater than or equal to the square of g (also known as g squared).
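
To make that rule concrete, here is a minimal sketch of the calculation exactly as I have just described it, with the zero-cited documents left out; the function name and the citation counts are my own invention, not Professor X’s actual numbers.

```python
# A sketch of the g-index as described above: rank the cited papers by
# citation count, keep a running total of citations, and take the last
# rank g at which that running total is at least g squared. Papers with
# zero citations are left out, matching the description in this post.

def g_index(citations):
    ranked = sorted((c for c in citations if c > 0), reverse=True)
    total = 0
    g = 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank ** 2:
            g = rank
    return g

example = [50, 18, 12, 10, 6, 5, 1, 1, 0, 0]  # invented citation counts
print(g_index(example))  # 8: the eight cited papers total 103 citations, and 103 >= 64 (8 squared)
```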

Also on reflection, for the rows where there were “0 cites” I should have written “does not count” instead of “93” in the “Total number of citations” column, as people naturally asked afterwards why the g-index of my Professor X was not 9. In my presentation I had tried to explain what would happen if the documents with 0 citations had actually had a citation each, which would have yielded a g-index of 9, but I was not clear enough. I should have had a second slide to show this:

extra g-index

Here we can see that the g-index would be 9, because at the 9th row the total number of citations is still higher than g squared, but at the 10th row the total number of citations is less than g squared.

My “0 cites” rows were something of a complication and a red herring, and yet they also represent a crucial concept, because there are many, many papers out there with 0 citations, and so there will be many researchers with papers that have 0 citations.

I also found, when I went back to that original paper by Egghe, that it has a “Note added in proof” which describes a variant where papers with zero citations, or indeed fictitious papers, are included in the calculation, in order to provide a higher g-index score. However, I have not used the variant. In the original paper Egghe refers to “T”, which is the total number of documents, or as he described it, “the total number of ever cited papers”. Documents that have never been cited cannot be part of “T”, and that’s why my explanation of the g-index excludes those documents with 0 citations. I believe that Egghe saw this as a feature of the h-index which he valued, i.e. representing only the most highly cited papers in the single number, which is why I did not use the variant.

However, others have used the variant in their descriptions of the g-index and the way they have calculated it in their papers, especially in more recent papers that I’ve come across, so this confuses our understanding of exactly what the g-index is. Perhaps that’s why the Wikipedia entry talks about an “average” because the inclusion of fictitious papers does seem to me more like calculating an average. No wonder it took me such a long time to feel that I understood this metric satisfactorily!

My advice is: whenever you read about a g-index in future, be sure that you understand what is included in “T“, i.e. which documents qualify to be included in the calculation. There are at least three possibilities:

  1. Documents that have been cited.
  2. Documents that have been published but may or may not have been cited.
  3. Entirely fictitious documents that have never been published and act as a kind of “filler” for rows in our table to help us see which “g squared” is closest to the total number of citations!

I say “at least” because of course these documents are the ones in the data set that you are using, and there will also be variability there: from one data set to another and over time, as data sets get updated. In many ways, this is no different from other bibliometric measures: understanding which documents and citations are counted is crucial to understanding the measure.

Do I think that we should use the variant or not? In Egghe’s Note, he pointed out that it made no difference to the key finding of his paper which explored the works of prestigious authors. I think that in my example, if we want to do Professor X justice for the relatively highly cited article with 50 cites, then we would spread the total of citations out across the documents with zero citations and allow him a g-index of 9. That is also what the g-index was invented to do, to allow more credit for highly cited articles. However, I’m not a fan of counting fictitious documents. So I would prefer that we stick to a g-index where “T” is “all documents that have been published and which exist in the data set, whether or not they have been cited.” So not my possibility no. 1 which is how I actually described the g-index, and not my possibility no. 3 which is how I think Wikipedia is describing it. This is just my opinion, though… and I’m a librarian rather than a bibliometrician, so I can only go back to the literature and keep reading.
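
For anyone who wants to see how much the choice of “T” matters in practice, here is a hedged sketch that lets you switch between the three possibilities listed above; the mode names and the citation counts are my own invention for illustration, and this is not Egghe’s notation.

```python
# A sketch comparing the three readings of "T" listed above: "cited_only"
# is possibility 1, "all_published" is possibility 2, and
# "fictitious_fillers" is possibility 3, where imaginary zero-citation rows
# are added until g squared overtakes the running total of citations.
# The mode names and citation counts are invented for illustration.

def g_index_variant(citations, mode="all_published"):
    if mode == "cited_only":
        ranked = sorted((c for c in citations if c > 0), reverse=True)
    else:
        ranked = sorted(citations, reverse=True)
    if mode == "fictitious_fillers":
        # pad with imaginary uncited "papers" so we never run out of rows
        ranked = ranked + [0] * int(sum(citations) ** 0.5)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank ** 2:
            g = rank
    return g

example = [50, 18, 12, 10, 6, 5, 1, 1]  # eight cited papers, 103 citations in total
print(g_index_variant(example, "cited_only"))              # 8
print(g_index_variant(example + [0, 0], "all_published"))  # 10: ten real documents, and 103 >= 100
print(g_index_variant(example, "fictitious_fillers"))      # 10: filler rows let g reach 10
```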

One final thought: why do librarians need to understand the g-index anyway? It’s not all that well used, so perhaps it’s not necessary to understand it. And yet, knowledge and understanding of some of the alternatives to the h-index, and of what they are hoping to reflect, will help to ensure that you and the people you advise, be they researchers or university administrators, all use the h-index appropriately – i.e. not on its own!

Note: the slides have been corrected since this blogpost was first published. Thanks to the reader who helped me out by spotting my typo for the square of 9!

How to speed up publication of your research – and impress journal editors

In my last blogpost I looked at the time it takes to get published, and this led to a brief Twitter chat about how editors’ time gets wasted. Of course there are things that researchers can do to help speed up the whole system, just as there are things that publishers are trying to do. If you’re interested in how to write a great journal article in the first place (which of course is what will increase your chances of acceptance and therefore speed things up) then you could take a look at some great advice in the Guardian.

I’m not looking at writing in this blogpost, rather at the steps to publication that researchers can influence, sometimes for themselves and sometimes more altruistically. I imagine that a board game could be based on the academic publication process, whereby you get cards telling you that you must wait longer, or you get rejected, and sent to the start. Very occasionally you are told that a peer has sped things up for you in some way so that you (and your field) can move on.

Do what you’re told!
It sounds simple, but it’s amazing how many editors report that authors appear not to have read the guidelines before submitting. Wrong word counts, line spacing, no data supplied, wrong reference formats, etc. could all result in a desk rejection, thus wasting everyone’s time. A good reference managing tool will ease and expedite reference style reformatting, but even so, matching each journal’s style is a lot of work if you submit the same article to many journals, so perhaps this begins with choosing the right journal (see below).

Also, authors who are re-submitting need to ensure that they respond to ALL the editor’s and reviewers’ recommendations. Otherwise, there might be another round of revisions… or a rejection, setting you back to square one.

Be brief and ‘to the point’ in your correspondence with journal editors
First question to authors: do you really need to write to the editor? Writing to check if their journal is a good match for your article is apparently annoying to journal editors, especially if your email looks like an automated one. If you have a question, be sure that you can’t find the answer on the journal’s website: this way you can save editors’ time so that they use it to make the right publishing decisions. If you want to make a good impression on an editor or seek their opinion then perhaps find a way to meet them personally at a conference. (Tip: if they are on Twitter then they might announce which conferences they are going to!)

Choose the right journal to submit to

I have no magic formula but these steps might help you to decide:

  1. Look for a good subject match. Then consider whether the type, scale and significance of your work fit the type of material usually published in that journal. In other words, read some of the content recently published in the journal you intend to submit to. Check their calls for papers and see if you match them. And read their guidelines (see above).
  2. Listen to experienced authors. If you know someone with experience of publishing in a particular journal, then perhaps ask them for advice: getting to know the journal you are submitting to is important in helping you to target the right one.
  3. Use bibliometric scores with caution. I have blogged here previously about 12 signs of quality for a journal, and note that I don’t mention the impact factor! My number 1 is about peer review, and I expand on that in this post, below. My number 5 is whether the journal is indexed on Web of Science or Scopus: this is not all about the impact factor either. What it means is that the journal you are considering has passed selection criteria in order to be indexed at all, that your article will be highly discoverable, and that it would contribute to your own h-index as an author. If you really want to use a bibliometric, you could look at the article influence scores, and since this blogpost is about speeding things up, then you could also consider the immediacy index, which indicates how quickly items are cited after publication.
  4. Can’t I just take a sneaky peek at the impact factors? I think this is a last resort! Some people see them as a proxy for a good reputation, but after all I’ve read about bibliometrics, I’d rather use my twelve signs. In my last blogpost I reported on a Nature News item which implied that middle-range impact factor journals are likely to have a faster turnaround time, but you’ll have to dig a bit deeper to see if there’s anything in that idea for your discipline. In my view, if everyone is targeting the top impact factor journals, you can be sure that these journals will have delays and high rejection rates. You might miss the chance to contribute to a “rising star” journal.

Choose a perfect peer reviewer!
At some journals, you may get an option to suggest peer reviewers. I don’t imagine that there are many experts in your field who are so good at time management that they can magically create time, and who already know about and value your work, so you will have to balance your needs with what is on offer. Once again, you should be careful to follow the journal’s directions in suggesting peer reviewers. For example, it’s no good suggesting an expert practitioner as a peer reviewer if the journal explicitly asks for academics, and you probably can’t suggest your colleague either: read what the journal considers to be appropriate.

Is it the right peer review mechanism?
There are many variations of peer review, and some innovative practice might appeal to you if your main goal is speed of publication, so you could choose a journal that uses one of these modern methods.

Here is a list of some peer review innovations with acceleration in mind:

  1. You may have an option to pay for fast tracked peer review at your journal of choice.
  2. Seek an independent peer review yourself, before submission. The same type of company that journals might turn to if they offer a paid-for fast track peer review may also offer you a report that you can pay for directly. The example I know of is Rubriq.
    You can also ask colleagues or peers for a pre-peer review, if you think that they might be willing.
  3. Take advantage of a “cascading peer review” gold open access (OA) route, at a publisher which offers that. It’s a shame that OA often appears to be a lower quality option, because publishers say to authors the equivalent of “you’re rejected from this top journal but are invited to submit to our gold OA journal”. Such an invitation doesn’t reflect well on the publishers either, because of course gold OA is the route where authors pay a fee or “Article Processing Charge”. However, if your research budget can cover the cost then this can be quicker.
  4. Open reviews: there is a possibility that reviewers will be more thorough if their reviews are publicly seen, so I’m not sure that this will necessarily speed the process up. But if you’re looking for explicit reasons why you’ve been rejected, then such a system could be helpful. PeerJ is a well known example of a journal that does this.
  5. Publish first and opt for post publication peer review. The example often given is F1000, which is really a publishing platform rather than a journal. Here, the research is published first, and labelled as “awaiting peer review”. It is indexed after peer review by places like Pubmed, Scopus, the British Library, etc. F1000 also has open peer review, so the reviews as well as the latest version can be seen. Authors can make revisions after peer review and at any time. An alternative to F1000 is that you can put your draft paper into an open access repository where it will at least be visible/available, and seek peer review through publication in a journal later. However, there are disciplinary differences as to whether this will be acceptable practice or not when you later submit to journals (is it a redundant publication because it’s in a repository?), and indeed whether your pre-print will be effective in claiming your “intellectual territory”. In some disciplines, the fear is that repository papers are not widely seen, so others might scoop you to reach recognised publication. In the sciences this is less likely, since access to equipment and lengthy experiments are not likely to be duplicated in time.

Be a peer reviewer, and be prompt with your responses
I have three steps you can follow, to accelerate even traditional peer review:

  1. When invited to carry out a peer review that you cannot find time for, or for which you are not the right person, quickly say “no”, and perhaps suggest someone else suitable. This will speed things up for your peers and make a good impression on an editor: one day this might be important.
  2. If you say “yes” then you can be prompt and clear: this will support your peers but may also enhance your reputation. Larger publishers may track peer reviewers’ work on a shared (internal only or publicly visible!) system, and you can claim credit yourself somewhere like Publons. (See an earlier blogpost that discusses credit for peer review.)
  3. Are you setting the bar too high? Raising standards ever higher lengthens the time it takes for research to be shared. Of course this is also about meeting the quality standards of the journal and thereby setting and maintaining the standards of your discipline. Not an easy balancing task!

Finally, remember that publication is only the beginning of the process: you also have to help your colleagues, peers and practitioners to find out about your article and your work. Some editors and publishers have advice on how to do that too, so I’m sure that it will impress them if you do this!

12 reasons scholars might cite: citation motivations

I’m sure I read something similar about this once, and then couldn’t find it again recently… so here is my quick list of reasons why researchers might cite. It includes “good” and “bad” motivations, and might be useful when considering bibliometric indicators. Feel free to comment on this post and suggest more possible motivations. Or indeed any good sources!

  1. Set own work in context
  2. Pay homage to experts
  3. Give credit to peers
  4. Criticise/correct previous work (own or others)
  5. Signpost under-noticed work
  6. Provide further background reading
  7. Lend weight to own claims
  8. Self citations to boost own bibliometric scores and/or signpost own work
  9. Boost citations of others as part of an agreement
  10. Gain favour with journal editor or possible peer reviewers by citing their work
  11. Gain favour by citing other papers in the journal of choice for publication
  12. Demonstrate own wide reading/knowledge

Keeping up to date with bibliometrics: the latest functions on Journal Citation Reports (InCites)

I recently registered for a free, live, online training session on the latest functions of Journal Citation Reports (JCR) on InCites, from Thomson Reuters (TR). I got called away during the session, but the great thing is that they e-mail you a copy so you can catch up later. You can’t ask questions, but at least you don’t miss out entirely! If you want to take part in a session yourself, then take a look at the Web of Science training page. Or just read here to find out what I picked up and reflected on.

At the very end of the session, we learnt that 39 journal titles have been suppressed in the latest edition. I mention it first because I think it is fascinating to see how journals go in and out of the JCR collection, since having a JCR impact factor at all is sometimes seen as a sign of quality. These suppressed titles are suspended and their editors are informed why, but it is apparently because of either a high self-citation rate, or something called “stacking”, whereby two journals are found to be citing each other in such a way that they significantly influence the latest impact factor calculations. Journals can come out of suspension, and indeed new journals are also added to JCR from year to year. Here are the details of the JCR selection process.

The training session began with a look at Web of Science: they’ve made it easier to see JCR data when you’re looking at the results of a Web of Science search, by clicking on the journal title: it’s good to see this link between TR products.

Within JCR, I like the visualisation that you get when you choose a subject category to explore: this tells you how many journals are in that category and you can tell the high impact factor journals because they have larger circles on the visualisation. What I particularly like though, is the lines joining the journals: the thicker the line, the stronger the citing relationship between the journals joined by that line.

It is the librarian in me that likes to see that visualisation: you can see how you might get demand for journals that cite each other, and thus get clues about how to manage your collection. The journal profile data that you can explore in detail for an individual journal (or compare journal titles) must also be interesting to anyone managing a journal, or indeed to authors considering submitting to a journal. You can look at a journal’s performance over time and ask yourself “is it on the way up?” You can get similar graphs on SJR, of course, based on Elsevier’s Scopus data and available for free, but there are not quite so many different scores on SJR as on JCR.

On JCR, for each journal there are new “indicators”, or measures/scores/metrics that you can explore. I counted 13 different types of scores. You can also explore more of the data behind the indicators presented than you used to be able to on JCR.

One of the new indicators is the “JIF percentile”. This is apparently introduced because the quartile information is not granular or meaningful enough: there could be lots of journals in the same quartile for that subject category. I liked the normalised Eigenfactor score in the sense that the number has meaning at first glance: higher than 1 means higher than average, which is more meaningful than a standard impact factor (IF). (The Eigenfactor is based on JCR data but not calculated by TR. You can find out more about it at Eigenfactor.org, where you can also explore slightly older data and different scores, for free.)

If you want to explore more about JCR without signing up for a training session, then you could explore their short video tutorials and you can read more about the updates in the JCR Help file.

Peer review motivations and measurement

Yesterday’s blogpost by David Crotty on Scholarly Kitchen outlines the problems with the notion of giving credit for peer review. It is very thought-provoking, although I’m personally still keen to see peer review done in the open, and to explore the notion of credit for peer review some more. For me the real question is not whether to measure it, but how best to measure it and what value to set on that measure.

Both the blogpost and its comments discuss researchers’ current motivation for carrying out peer review:

  • To serve the community & advance the field (altruism?)
  • To learn what’s new in the field (& learn before it is published, i.e. before others!)
  • To impress editors/publishers (& thereby increase own chances of publication)
  • To contribute to a system in which their own papers will also benefit (self interest?)

Crotty writes that problems would arise in peer review, through behavioural change amongst researchers, if we change their motivation such that they chase credit points. He poses some very interesting questions, including:

How much career credit should a researcher really expect to get for performing peer review?

I think that’s a great question! However, I do think that we should investigate potential ways to give credit for peer review. I’ve previously blogged about the problems with peer review and followed up on those thoughts and I’ve no doubt that I’ll continue to give this space more thought: peer review is about quality, and as a librarian at heart, I’m keen that we have good quality information available as widely as possible.

In David Crotty’s post I am particularly concerned by the notion that researchers, as currently intrinsically motivated, will be prepared to take on higher workloads. I don’t want that for researchers: they are already under enormous amounts of pressure. Not all academics can work all waking hours. Some actually do (at least some of the time), I know, but presumably someone else cleans and cooks for them (wives? paid staff?), and even if all researchers had someone to do that for them, it’s not fair to the researchers, or even good for academia, for the community to consist of such isolated individuals.

One commenter makes the point that not all peer reviews are alike and that some might take a day, some 20 minutes, so if credit is to be given along the lines of how many reviews someone has carried out, well, this won’t be quite fair. And yet, as Crotty argued in his blogpost, if you complicate your measurement then it’s really overkill, because no-one really cares to know more than a simple count. Perhaps that’s a part of what needs fixing with peer review: a little more uniformity of practice. Is it fair to the younger journals (probably with papers from early career researchers who don’t trust themselves to submit to the journal giants) that they get comparatively cursory time from peer reviewers?

Another comment mentions that the current system favours free riding: not everyone carries out peer review, even though everyone benefits from the system. The counterpoint to this is in another comment which points out that there is already a de facto system of credit, in that journal editors are aware of who is carrying out peer review, and they wield real power, reviewing papers and sitting on funding panels. I’m not sure that I’d want to rely on a busy editor’s memory to get the credit I deserved, but the idea reminded me of how the peer review system has worked up until now, and the issue seems to be that the expanding, increasingly international research and publishing community is no longer as close-knit as it once was.

There is a broader issue here. Crotty suggested that university administrators would not want researchers to take the time to do peer review, but to do original research all the time since that’s what brings in the money and the glory. But in order to be a good researcher (and pull in the grant funding), one has to read others’ papers, and be aware of the direction of research in the field. Plus, review papers are often more highly cited than original research papers, so surely those administrators will want researchers who produce review papers and pull in the citations? Uni rankings often use bibliometric data, and administrators do care about those!

What we’re really talking about is ‘how to measure researchers’ performance’, and perhaps peer review (if openly measured) is a part of that, but perhaps also not. I like the notion of some academics becoming expert peer reviewers, whilst others are expert department/lab leaders or grant writers, or authors or even teachers. We all have different strengths and perhaps it’s not realistic to expect all of our researchers to do everything, but if you want a mixture in your team then you need to know who is doing what.

I’d like to finish with Kent Anderson’s thoughtful comment about retaining excellent reviewers:

Offering credit and incentives aimed at retaining strong reviewers is different from creating an incentives system to make everyone a reviewer (or to make everyone want to be a reviewer).

Let’s think on it some more…

Alerts are really helpful and (alt)metrics are interesting but academic communities are key to building new knowledge.

Some time ago, I set Google Scholar to alert me if anyone cited one of the papers I’ve authored. I recommend that academic authors should do this on Scopus and Web of Science too. I forgot all about it until yesterday, when an alert duly popped into my e-mail.

It is gratifying to see that someone has cited you (& perhaps an occasional reminder to update the h-index on your CV), but more importantly, it alerts you to papers in your area of interest. This is the paper I was alerted to:

José Luis Ortega (2015) Relationship between altmetric and bibliometric indicators across academic social sites: The case of CSIC’s members. Journal of Informetrics, Volume 9, Issue 1, Pages 39-49 doi: 10.1016/j.joi.2014.11.004

I don’t have a subscription to ScienceDirect so I couldn’t read it in full there, but it does tell me the research highlights:

  • Usage and social indicators depend on their own social sites.
  • Bibliometric indices are independent and therefore more stable across services.
  • Correlations between social and usage metrics regarding bibliometric ones are poor.
  • Altmetrics could not be a proxy for research evaluation, but for social impact of science.

and of course, the abstract:

This study explores the connections between social and usage metrics (altmetrics) and bibliometric indicators at the author level. It studies to what extent these indicators, gained from academic sites, can provide a proxy for research impact. Close to 10,000 author profiles belonging to the Spanish National Research Council were extracted from the principal scholarly social sites: ResearchGate, Academia.edu and Mendeley and academic search engines: Microsoft Academic Search and Google Scholar Citations. Results describe little overlapping between sites because most of the researchers only manage one profile (72%). Correlations point out that there is scant relationship between altmetric and bibliometric indicators at author level. This is due to the almetric ones are site-dependent, while the bibliometric ones are more stable across web sites. It is concluded that altmetrics could reflect an alternative dimension of the research performance, close, perhaps, to science popularization and networking abilities, but far from citation impact.

I found a fuller version of the paper on Academia.edu and it is indeed an interesting read. I’ve read other papers that look specifically at altmetric and bibliometric scores for one particular journal’s articles, or articles from within one discipline. I like the larger scale of this study, and the conclusions make sense to me.

And my paper that it cites? A co-authored one that Brian Kelly presented at the Open Repositories 2012 conference.

Kelly, B., & Delasalle, J. (2012). Can LinkedIn and Academia.edu Enhance Access to Open Repositories? In: OR2012: the 7th International Conference on Open Repositories, Edinburgh, Scotland.

It is also a paper that is on Academia.edu. I wonder if that’s partly why it was discovered and cited? The alt- and biblio-metrics for that paper are not likely to be high (I think of it as embryonic work, for others to build on), but participation in an online community is still a way to spread the word about what you’ve investigated and found, just like attending a conference.

Hence the title of this blog post. I find the alert useful to keep my knowledge up to date, and the citation gives me a sense of being part of the academic community, which is why I find metrics so interesting. What they tell the authors themselves is of value, beyond any performance measurement or quality signalling aspects.

12 Questions to ask, for basic clues on the quality of a journal

When choosing where to publish a journal article, what signs do you look out for? Here are some questions to ask or aspects to investigate, for clues.

1. Is it peer reviewed? (Y/N and every nuance in between) See the journal’s website.
2. Who is involved in it? The editor & publisher? Are they well known & well thought of? Who has published articles there already: are these big players in your field? Read the journal!
3. Is it abstracted/indexed by one of the big sources in your field? (The journal’s website should tell you this. Big publishers also offer their own databases of house journals.)
4. What happens when you search on Google for an article from the journal? Do you get the article in the top few results? And on Google Scholar?
5. Does it appear in the Web of Science or Scopus journal rankings?
6. Take a look on COPAC: which big research libraries subscribe?
7. Have a look at the UK’s published RAE2008 / forthcoming REF2014 data and see if articles from that journal were part of the evidence submitted, and rated as 4*.
8. Do the journal articles have DOIs? This is a really useful feature for promotion of your article, and it will mean that altmetric tools can provide you with evidence of engagement with your article.
9. Is there an open access option? (See SherpaRomeo.) This is a requirement of many research funders, but it is also useful for you when you want to promote your article.
10. Is it on the list of predatory OA journals? You might want to avoid those, although check for yourself. Note that some journals on the list are disputed/defended against the accusation of predation!
11. Is it listed on the ISSN centre’s ROAD (http://road.issn.org/)? What does this tell you about it?
12. If you have access through a library subscription, is it listed in Ulrich’s periodicals directory? What does this tell you about it? Note the “peer review” symbol of a striped referee’s shirt: if the shirt is not there, it doesn’t necessarily mean that the journal is not peer reviewed: you may have to investigate further.
FURTHER NUANCES…
– What type of peer review is used? Is it rigorous? Is it useful to you, even if you get rejected?
– Time to rejection/acceptance: how soon do you need to be published?
– Acceptance/rejection rate
– Journal Impact Factor / SJR score(s) / quartile for the field