Not enough time for reading in academia: can we measure it?

I wanted to explore a topic which has been popular on Twitter, at least amongst the tweets I saw over the summer: that of academics struggling to find the time to read. I’ve written this blogpost in something of a “summer exploration” spirit, since I connected this topic with my interest in bibliometrics.

During the summer there were many mentions on Twitter of the importance of reading in academia. Reading of any kind is important for training our minds to think. It’s important for training our own ability with words, our writing skills. And it’s important for keeping up to date with academic discoveries and developments in fields of interest, to name but a few advantages of reading. Pat Thomson is eloquent on the matter.

As a librarian by background, of course I’m a big fan of reading! But I can see how the pressure on scholars and researchers to publish, to bring in research grants and to contribute to other activities that are measured in performance evaluations and university rankings might actually be causing them to read less. I may be doing researchers a disservice by suggesting that they are reading less, but I mean it sympathetically. Carol Tenopir’s 2014 research into reading, based on questionnaires and academics’ self-reporting, is outlined on the Scholarly Kitchen blog: at first it did look like there was a decline in reading, but in the end the research might only indicate that a plateau has been reached, at a time when the volume of content being published keeps increasing. This might make some scholars feel that they are unable to keep up with their field.

My provocative thought goes like this: If focussing on publication outputs and measuring them via bibliometrics has led to a lack of reading time (which I’m a long way off proving), then perhaps the solution is to also measure (and give credit for) time invested in reading!

Disciplinary differences are at the core of academic reading habits, as evidenced by, among other things, studies of library impact on students. Such studies have involved attempts to correlate student grades with library accesses, as explored in this 2015 paper:

Here there is some correlation between “quality” academic performance and library accesses, although the main conclusion seems to be the importance of the library when it comes to student retention. I also remember reading Graham Stone’s earlier work (cited in the paper above), and the importance of data protection issues. These studies identify cohorts of students rather than individuals and their grades, due to ethical (and legal) concerns which would apply just as much to researchers.

We must also remember that much content is not digital, or not in the library, whether physical or online. Increasingly, scholarly content is available online via open access, so we don’t need to be identifiably logged in to read it. And indeed, Tenopir’s later work reminds us that content once downloaded can be re-read or shared, outside of the publisher or library platforms. Automatically measuring reading to any degree of accuracy becomes possible only if you dictate how and where academic reading is to be done. Ethical concerns abound!

Instead of measuring time spent reading or volumes of content downloaded or accessed by researchers, perhaps we could give credit to researchers who cite more. After all, citations are an indication that the authors have read a paper, aren’t they? OK, I am being provocative again: how do we know which co-authors have read which of the cited papers? How do we know that a cited paper has been read in full: what if the pre-print has been read rather than the version of record, or only the abstract? Such doubts about what it means to read a paper are expressed in the comments of the Scholarly Kitchen post mentioned earlier.

Actually, we could say that reading and citations are already indirectly assessed, because we evaluate written outputs and publications, and their quality reflects the amount and quality of reading behind them. I think that’ll have to do, because the more I read about academic reading, the more I think we can’t know! How we evaluate the outputs is another matter, of course. I’ve blogged about peer review, but not article level metrics – yet.

I tried to track down Tenopir’s published paper, based on the self-reported questionnaire research critiqued on the Scholarly Kitchen. I think it must be the paper entitled “Scholarly article seeking, reading, and use: a continuing evolution from print to electronic in the sciences and social sciences”. The critiquing all occurred before the paper was published, so direct links weren’t provided. Research into how much researchers are reading, whether based on downloads or questionnaires, can illustrate disciplinary differences, or signal changes in research practice over time. Tenopir and her co-authors shed light on this, and opened up more questions to be answered. I wonder whether researchers could be persuaded to allow tracking software to spy on their reading habits for a limited period… there is much more to be explored in this area, but I’m sure that we won’t gain trust by suggesting reading metrics!

Image credit: CC0 Pixabay.

 


Learning about Swiss libraries

pretty spire and buildings, with blue sky in Zurich

Last week I was privileged to be a speaker at the Library Connect event in Zurich. I was talking about research impact metrics and presented the handy cards/poster that I worked on, but my brief was to run a workshop so I didn’t talk too much! I said why I think that bibliometrics are part of the librarian’s domain and summarised the FWCI: then it was on to our workshop discussions. I was really glad to hear more from the attendees about their experiences, and I think it was a real strength of the event that librarians got to talk to each other.

participants around a coffee table, with lots of papers on it.
Workshopping!

I’ve been to the Nordic Library Connect event in the past, but what was really nice about the Swiss one was that we had researchers as well as librarians there, and the setting was informal, so we had lots of conversations in the breaks as well as in the workshop itself. Whereas most of the Scandinavian librarians were from large central university libraries, at Zurich there were more librarians from smaller departmental and embedded libraries. I get the impression that in German-speaking areas in general, departmental libraries are more common than in the UK and Scandinavia.

Departmental librarians have slightly different concerns, reflecting the needs of the particular subject community they serve. I chatted (in my clunky German!) with two librarians from the University of Zurich Economics department library, who reminded me of the importance of working papers amongst their community. And it was interesting to hear perspectives from CERN, where they have excellent data about their publications and of course the arXiv resource. I’ve also learnt about “Lib4RI”, a library service that serves four scientific research institutes in the ETH Domain.

I was really pleased to see Dr Oliver Renn of ETH Zurich again; he had been a speaker at the Stockholm event. His library (or “Infozentrum”) really seems to have good links with his department, and I can highly recommend a special edition of their newsletter, which presents various attitudes towards bibliometrics. The ETH Department of Chemistry and Applied Biosciences uses the Altmetric donut so that their researchers can see who is giving their outputs attention, and they are working with Kudos to promote their science.

Charon stands before a projected slide, with notes in hand - smiling!
Charon in action!

A highlight of the day for me was Charon Duermeijer talking about research ethics and prompting us all to think about our role in supporting researchers with such matters. I highly recommend her as a speaker because she interacts with the audience, asking us questions, and her slides have real substance. I’m sure that she’ll be sharing her slides, so you can get something of a feel for her talk, but her passion and anecdotes will be missing, so catch her if you can at another event.

And if you get a chance to visit Zurich, then I highly recommend it!

blue lake with blue sky above and a jetty protruding into the lake. On the horizon are mountains, some capped with snow.

Bibliometrics and the academic librarian

Next week I’m going to be at another Elsevier Library Connect event, this time in Zurich (you can still register if you want to join in!). These events are usually attended by librarians who are not bibliometricians; often the bibliometric specialists sit elsewhere in their libraries or universities. But I think that there’s a need for librarians of many kinds to develop an understanding of bibliometrics, and I look forward to discussing with attendees the uses of bibliometrics they’ve come across, and what they think librarians can contribute to the bibliometrics community. Here are some of my thoughts on the topic.

The field of bibliometrics seems to me to be growing: ever more studies are being published. The knowledge and skills of these academic experts often seem intimidating to me as a practitioner and librarian. There are new developments all the time, which can make it hard to keep up to date, such as the recent initiative to open citation data via Crossref.

Meanwhile, I notice ever more job advertisements for new kinds of roles in library services or university administration: “Bibliometric specialist”, “Bibliometrician”, or roles related to research impact which involve using bibliometric (as well as altmetric) data and tools. These are jobs for people who are used to handling huge amounts of data and applying sophisticated analysis techniques to create reports. Expertise with mathematical and statistical methods is required: that was never a part of my training and I feel left behind, but I don’t see that as a problem.

I’ve come to bibliometrics through a rather winding route and I’m interested in a lot more than just bibliometrics: I like watching many developments in the world of scholarly communication such as open access and open science, but also developments in peer review and so on: if you browse this blog you’ll get a flavour!

I have no intention of specialising in bibliometrics nor of spending my days producing bibliometric analyses: I’m simply not the best person to be doing that kind of work. Is there a role for someone like me (an ordinary librarian rather than a specialist bibliometrician) within the bibliometrics community? I think so…

In my view, great librarians are able to connect people with the information that they need: I take this, of course, from Ranganathan’s laws. We might do this behind the scenes through collection management which enables independent discovery, or in person, through a traditional enquiry or reference interview. (For illustration and entertainment, if you haven’t seen this helpdesk video then I highly recommend it!)

In the university setting, the resources that we offer as part of the library collection are used to generate and provide bibliometric data and measures. Deciding which sources of such data to add to the collection has sometimes been part of libraries’ collection management work. And indeed bibliometric scores like the impact factor might influence journal acquisition or cancellation decisions – although there are many factors to consider when evaluating journals.

Library users include researchers and scholars who are increasingly aware of, and concerned about, bibliometric scores, and in my view many could use some support. Of course, some researchers will find an interest in bibliometric research and learn way more than I ever could about it all. However, other researchers, while perfectly able to understand bibliometrics research, simply have other priorities, and yet others will not have had mathematics and statistics training and so will find bibliometric scores no easier to understand than a librarian like myself does.

And this is why I think that the ordinary librarian should remain involved in the bibliometrics scene: if we can understand bibliometric measures and significant developments in the field then not only will we be able to pass knowledge on to our user community, but it is also a sign that such measures can be understood by all academics who might need to understand them.

A scholarly field grows when the experts develop ever more sophisticated methods, and I am no scholar of bibliometrics so it’s fine that I am left behind. But bibliometrics are being used in the real world, as part of national research evaluation exercises, in university ranking schemes and indeed within author online profiles. Academic librarians know both the people involved and the people affected by such developments: we are central to universities, and can act as links, bridging the specialists who do bibliometric analyses for a university and the scholars whose careers are affected.

So the perspective of the intelligent lay person, the library practitioner, is a valuable one for the bibliometrics community: if we understand the measures then others will be able to, and we can help to spread the message about how such measures are being used.

I look forward to discussing more with the librarians who are coming to Zurich…

 

Snowy Stockholm and Nordic Librarians!

Picture from Twitter @Micha2508

Last week I attended Elsevier’s Nordic Library Connect event in Stockholm, Sweden. I presented the metrics poster/card and slide set that I had previously researched for Elsevier. It’s a great poster, but the entire set of metrics takes some digesting. Presenting them all as slides in around 30 minutes was not my best idea, even for an audience of librarians! The poster itself was popular though, as it is useful to keep on the wall somewhere to refer to, to refresh your knowledge of certain metrics:

https://libraryconnect.elsevier.com/sites/default/files/ELS_LC_metrics_poster_V2.0_researcher_2016.pdf

I reflected after my talk that I should probably have chosen a few of the metrics to present, and then added more information and context, such as screen captures of where to find these metrics in the wild. It was a very useful experience, not least because it gave me this idea, but also because I got to meet some lovely folks who work in libraries in the Scandinavian countries.

UPDATE 23 Nov 2016: now you can watch a video of my talk (or one of the others) online.

I met these guys… but also real people!

I particularly valued a presentation from fellow speaker Oliver Renn of ETH Zurich. He has obviously built up a fantastic relationship with the departments that his library serves. I thought that the menus he offered were inspired. These are explained in the magazine that he also produces for his departments: see p8 of this 2015 edition.

See tweets from the event by clicking on the hashtag in this tweet:

 

Reflections and a simple round-up of Peer Review Week 2016

It has been Peer Review Week this week: I’ve been watching the hashtag on Twitter with interest (and linked to it in a blogpost for piirus.ac.uk), and on Monday I attended a webinar called “Recognising Review – New and Future Approaches for Acknowledging the Peer Review Process”.

I do like webinars, as I’ve blogged before: professional development/horizon scanning from my very own desktop! This week’s one featured talks from Paperhive and Publons, amongst others, both of which have been explored on this blog in the past. I was particularly interested to hear that Publons are interested in recording not only peer review effort, but also editorial contributions. (Right at the end of the week this year, there have been suggestions that editorial work be the focus of next year’s peer review week so it seems to me that we’ve come full circle.) A question from the audience raised the prospect of a new researcher metric based on peer review tracking. I guess that’s an interesting space to watch!

I wondered where Peer Review Week came from: it seems to be a publisher initiative, if Twitter is anything to go by, since the hashtag is dominated by their contributions. On Twitter at least, it attracted some criticism of publishers: if you deliberately look at ways to recognise peer review, then some academics are going to ask whether it is right for publishers to profit so hugely from their free work. Some criticisms were painful to read and some were also highly amusing:

There were plenty of links to useful videos, webpages and infographics about how to carry out peer review, both for those new to it and for those already experienced, such as:

(On this topic, I thought that an infographic from Elsevier about reasons why reviewers refused to peer review was intriguing.)

Advice was also offered on how / how not to respond to peer reviews. My favourite:

And there were glimpses of what happens at the publisher or editor level:

There wasn’t much discussion of the issue of open vs blind or double blind peer review, which I found interesting because recognition implies openness, at least to me. And there was some interesting research reported on in the THE earlier this month, about eliminating gender bias through double blind reviews, so openness in the context of peer review is an issue that I feel torn about. Discussion on Twitter seemed to focus mostly on incentives for peer review, and I suppose recognition facilitates that too.

Peer Review Week has also seen one of the juiciest stories in scholarly communication: fake peer reviews! We’ve been able to identify so much dodgy practice in the digital age, from fake papers and fake authors to fake email addresses (so that you can be your own peer reviewer) and citation rings. Some of this is, on one level, highly amusing: papers by Maggie Simpson, or a co-author who is, in fact, your cat. But on another level it is also deeply concerning, and so it’s a space that will continue to fascinate me, because it definitely looks like a broken system: how do we stick it all back together?

A useful tool for librarians: metrics knowledge in bite-sized pieces By Jenny Delasalle

Here is a guest blogpost that I wrote for the new, very interesting Bibliomagician blog.

The Bibliomagician

Having worked in UK academic libraries for 15 years before becoming freelance, I saw the rise and rise of citation counting (although as Geoffrey Bilder points out, it should rightly be called reference counting). Such counting, I learnt, was called “bibliometrics”. The very name sounds like something that librarians should be interested in if not expert at, and so I delved into what they were and how they might help me and also the users of academic libraries. It began with the need to select which journals to subscribe to, and it became a filter for readers to select which papers to read. Somewhere along the road, it became a measurement of individual researchers, and a component of university rankings: such metrics were gaining attention.

Then along came altmetrics, offering tantalising glimpses of something more than the numbers: real stories of impact that could be found through online tracking. Context…


Explaining the g-index: trying to keep it simple

For many years now, I’ve had a good grip on what the h-index is all about: if you would like to follow this blogpost all about the g-index, then please make sure that you already understand the h-index. I’ve recently had a story published with Library Connect, which elaborates on my user-friendly description of the h-index. There are now many similar measures to the h-index, some of which are simple to understand, like the i10-index, which is just the number of papers you have published which have had 10 or more citations. Others are more difficult to understand, because they attempt something more sophisticated, and perhaps they actually do a better job than the h-index alone: it is probably wise to use a few of them in combination, depending on your purpose and your understanding of the metrics. If you enjoy getting to grips with all of these measures then there’s a paper reviewing 108 author-level bibliometric indicators which will be right up your street!
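If a worked example helps, here is a minimal Python sketch of the h-index and i10-index as just described, using an invented list of citation counts rather than any real researcher’s data:

```python
def h_index(citations):
    """Largest h such that h of the papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with 10 or more citations (the i10-index described above)."""
    return sum(1 for cites in citations if cites >= 10)

# Invented citation counts for one author's papers
papers = [50, 18, 12, 10, 8, 6, 2, 0, 0]
print(h_index(papers))    # 6 -- six papers have at least 6 citations each
print(i10_index(papers))  # 4 -- four papers have 10 or more citations
```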

If you don’t enjoy these metrics so much but feel that you should try to understand them better, and you’re struggling, then perhaps this blogpost is for you! I won’t even think about looking at the algorithms behind Google PageRank inspired metrics, but the g-index is one metric that even professionals who are not mathematically minded can understand. For me, understanding the g-index began with the excellent Publish or Perish website and book, but even this left me frowning. Wikipedia’s entry was completely unhelpful to me, I might add.

In preparation for a recent webinar on metrics, I redoubled my efforts to get the g-index into a manageable explanation. On the advice of my co-presenter from the webinar, Andrew Plume, I went back to the original paper which proposed the g-index: Egghe, L. (2006), “Theory and practice of the g-index”, Scientometrics, vol. 69, no. 1, pp. 131–152.

Sadly, I could not find an open access version, and even when I read this paper, it is peppered with precisely the sort of formulae that make librarians like me want to run a mile in the opposite direction! However, I found a way to present the g-index at that webinar, which built nicely on my explanation of the h-index. Or so I thought! Follow-up questions from the webinar showed where I had left gaps in my explanation and so this blogpost is my second attempt to explain the g-index in a way that leaves no room for puzzlement.

I’ll begin with my slide from the webinar:

g-index

 

I read out the description at the top of the table, which seems to make sense to me. I explained that I needed the four columns to calculate the g-index, reading off the titles of each column. I explained that in this instance the g-index would be 6… but I neglected to say that this is because it is the last row in my table where the total number of citations (my right-hand column) is greater than or equal to the square of g.

Why did I not say this? Because I was so busy trying to explain that we can forget about the documents that have had no citations… oh dear! (More on those “zero cites” papers later.) In my defence, this is exactly the same as saying that the citations received altogether must be at least g squared, but when presenting something that is meant to be de-mystifying, the more descriptions the better! So, again: the g-index in my table above is the largest document number (g) for which the running total of citations is greater than or equal to the square of g (also known as g squared).
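For anyone who finds code clearer than tables, here is a minimal Python sketch of that calculation. It follows the description above (only documents that have been cited are counted), and the citation counts are invented for illustration rather than taken from my slide:

```python
def g_index(citations):
    """Largest g such that the g most-cited papers together have at least
    g squared citations; papers with zero citations are left out, as above."""
    ranked = sorted((c for c in citations if c > 0), reverse=True)
    total = 0   # running total of citations: the right-hand column of the table
    g = 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:   # compare the running total with g squared
            g = rank
    return g

# Invented citation counts: seven cited papers plus two with zero citations
papers = [50, 18, 12, 10, 8, 6, 2, 0, 0]
print(g_index(papers))  # 7 -- the top 7 papers have 106 citations in total, and 106 >= 49
```

With these made-up numbers the answer happens to be 7; with the numbers on my slide it was 6, but the logic is the same.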

On reflection, for the rows where there were “0 cites” I should also have written “does not count” instead of “93” in the “Total number of citations” column, as people naturally asked afterwards why the g-index of my Professor X was not 9. In my presentation I had tried to explain what would happen if the documents with 0 citations had actually had a citation each, which would have yielded a g-index of 9, but I was not clear enough. I should have had a second slide to show this:

extra g-index

Here we can see that the g-index would be 9, because in the 9th row the total number of citations is higher than g squared, but in the 10th row the total number of citations is less than g squared.

My “0 cites” was something of a complication and a red herring, and yet it is also a crucial concept. Because there are many, many papers out there with 0 citations, and so there will be many researchers with papers that have 0 citations.

I also found, when I went back to that original paper by Egghe, that it has a “Note added in proof” which describes a variant where papers with zero citations, or indeed fictitious papers, are included in the calculation, in order to provide a higher g-index score. However, I have not used the variant. In the original paper Egghe refers to “T”, which is the total number of documents, or as he described it, “the total number of ever cited papers”. Documents that have never been cited cannot be part of “T”, and that’s why my explanation of the g-index excludes those documents with 0 citations. I believe that Egghe kept this as a feature of the h-index which he valued, i.e. representing the most highly cited papers in a single number, which is why I did not use the variant.

However, others have used the variant in their descriptions of the g-index and the way they have calculated it in their papers, especially in more recent papers that I’ve come across, so this confuses our understanding of exactly what the g-index is. Perhaps that’s why the Wikipedia entry talks about an “average” because the inclusion of fictitious papers does seem to me more like calculating an average. No wonder it took me such a long time to feel that I understood this metric satisfactorily!

My advice is: whenever you read about a g-index in future, be sure that you understand what is included in “T“, i.e. which documents qualify to be included in the calculation. There are at least three possibilities:

  1. Documents that have been cited.
  2. Documents that have been published but may or may not have been cited.
  3. Entirely fictitious documents that have never been published and act as a kind of “filler” for rows in our table to help us see which “g squared” is closest to the total number of citations!

I say “at least” because of course these documents are the ones in the data set that you are using, and there will also be variability there: from one data set to another and over time, as data sets get updated. In many ways, this is no different from other bibliometric measures: understanding which documents and citations are counted is crucial to understanding the measure.

Do I think that we should use the variant or not? In Egghe’s Note, he pointed out that it made no difference to the key finding of his paper which explored the works of prestigious authors. I think that in my example, if we want to do Professor X justice for the relatively highly cited article with 50 cites, then we would spread the total of citations out across the documents with zero citations and allow him a g-index of 9. That is also what the g-index was invented to do, to allow more credit for highly cited articles. However, I’m not a fan of counting fictitious documents. So I would prefer that we stick to a g-index where “T” is “all documents that have been published and which exist in the data set, whether or not they have been cited.” So not my possibility no. 1 which is how I actually described the g-index, and not my possibility no. 3 which is how I think Wikipedia is describing it. This is just my opinion, though… and I’m a librarian rather than a bibliometrician, so I can only go back to the literature and keep reading.
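To show how much the choice of “T” matters, here is a hedged Python sketch using the same invented citation counts as above, comparing the three possibilities; the flag names are mine, purely for illustration:

```python
def g_index(citations, include_uncited=False, allow_fictitious=False):
    """g-index with an explicit choice of which documents count towards 'T'.

    include_uncited:  also count published papers with zero citations (possibility 2).
    allow_fictitious: keep adding imaginary zero-citation rows until g squared
                      overtakes the citation total (possibility 3)."""
    ranked = sorted(citations, reverse=True)
    if not include_uncited and not allow_fictitious:
        ranked = [c for c in ranked if c > 0]   # possibility 1: ever-cited papers only
    total, g, rank = 0, 0, 0
    while True:
        rank += 1
        if rank <= len(ranked):
            total += ranked[rank - 1]
        elif not allow_fictitious:
            break                               # no more real documents to count
        if total >= rank * rank:                # fictitious rows add rows but no citations
            g = rank
        else:
            break
    return g

papers = [50, 18, 12, 10, 8, 6, 2, 0, 0]  # invented: 7 cited papers + 2 uncited
print(g_index(papers))                                               # 7  (cited only)
print(g_index(papers, include_uncited=True))                         # 9  (all published)
print(g_index(papers, include_uncited=True, allow_fictitious=True))  # 10 (padded with fictitious rows)
```

Three different answers from the same publication list, which is exactly why it pays to check what has been counted before comparing g-index values.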

One final thought: why do librarians need to understand the g-index anyway? It’s not all that well used, so perhaps it’s not necessary to understand it. And yet, knowledge and understanding of some of the alternatives to the h-index and what they are hoping to reflect will help to ensure that you and the people who you advise, be they researchers or university administrators, will all use the h-index appropriately – i.e. not on its own!

Note: the slides have been corrected since this blogpost was first published. Thanks to the reader who helped me out by spotting my typo for the square of 9!

12 reasons scholars might cite: citation motivations

I’m sure I read something about this once, and then couldn’t find it again recently… so here is my quick list of reasons why researchers might cite. It includes “good” and “bad” motivations, and might be useful when considering bibliometric indicators. Feel free to comment on this post and suggest more possible motivations. Or indeed any good sources!

  1. Set own work in context
  2. Pay homage to experts
  3. Give credit to peers
  4. Criticise/correct previous work (own or others)
  5. Signpost under-noticed work
  6. Provide further background reading
  7. Lend weight to own claims
  8. Self citations to boost own bibliometric scores and/or signpost own work
  9. Boost citations of others as part of an agreement
  10. Gain favour with journal editor or possible peer reviewers by citing their work
  11. Gain favour by citing other papers in the journal of choice for publication
  12. Demonstrate own wide reading/knowledge

Is this research article any good? Clues when crossing disciplines and asking new contacts.

As a reader, you know whether a journal article is good or not by any number of signs. Within your own field of expertise, you know quality research when you see it: you know, because you have done research yourself and you have read & learnt lots about others’ research. But what about when it’s not in your field of expertise?

Perhaps the most reliable marker of quality is a recommendation from an expert in the field. But if you find something intriguing for yourself that is outside of your usual discipline, how do you know if it’s any good? It’s a good idea to ask someone for advice, and if you already know someone then great, but if not, there’s a lot you can do for yourself before you reach out for help, to ensure that you make a good impression on a new contact.

Librarians teach information skills and we might suggest that you look for such clues as:

  1. relevance: skim the article: is it something that meets your need? – WHAT
  2. the author(s): do you know the name: is it someone whose work you value? If not, what can you quickly find out about them, eg other publications in their name or who funds their work: is there a likely bias to watch out for? – WHO & WHY 
  3. the journal title/publisher: do you already know that they usually publish high quality work? Is it peer reviewed and if so, how rigorously? What about the editorial board: any known names here? Does the journal have an impact factor? Where is it indexed: is it in the place(s) that you perform searches yourself? – WHERE 
  4. date of publication: is it something timely to your need? – WHEN
  5. references/citations: follow some: are they accurate and appropriate? When you skim read the item, is work from others properly attributed & referenced? – WHAT
  6. quality of presentation: is it well written/illustrated? Of course, absolute rubbish can be eloquently presented, and quality research badly written up. But if the creators deemed the output of high enough value for a polished effort, then maybe that’s a clue. – HOW
  7. metrics: has it been cited by an expert? Or by many people? Are many reading & downloading it? Have many tweeted or written about it (altmetrics tools can tell you this)? But you don’t always follow the crowd, do you? If you do, then you might miss a real gem, and isn’t your research a unique contribution?! – WHO

I usually quote Rudyard Kipling at this point:

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

So far, so library school 101. But how do you know if the research within is truly of high quality? If most published research findings are false, as John Ioannidis describes, then how do you separate the good research from the bad?

An understanding of the discipline would undoubtedly help, and speed up your evaluation. But you can help yourself further, partly in the way you read the paper. There are some great pieces out there about how to read a scientific paper, eg from Natalia Rodriguez.

As I read something for the first time, I look at whether the article sets itself in the context of existing literature and research: Can you track and understand the connections? The second thing I would look at is the methodology/methods: have the right ones been used? Now this may be especially hard to tell if you’re not an expert in the field, so you have to get familiar with the methodology used in the study, and to think about how it applies to the problem being researched. Maybe coming from outside of the discipline will give you a fresh perspective. You could also consider the other methodologies that might have applied (a part of peer review, for many journals). I like the recommendation from Phil Davis in the Scholarly Kitchen that the methodology chosen for the study should be appropriate or persuasive.

If the chosen methodology just doesn’t make sense to you, then this is a good time to seek out someone with expertise in the discipline, for a further explanation. By now you will have an intelligent question to ask such a contact, and you will be able to demonstrate the depth of your own interest. How do you find a new contact in another discipline? I’ll plug Piirus here, whose blog I manage: it is designed to quickly help researchers find collaborators, so you could seek contacts & reading recommendations through Piirus. And just maybe, one day your fresh perspective and their expertise could lead to a really fruitful collaboration!

Keeping up to date with bibliometrics: the latest functions on Journal Citation Reports (InCites)

I recently registered for a free, live, online training session on the latest functions of Journal Citation Reports (JCR) on InCites, from Thomson Reuters (TR). I got called away during the session, but the great thing is that they e-mail you a copy so you can catch up later. You can’t ask questions, but at least you don’t miss out entirely! If you want to take part in a session yourself, then take a look at the Web of Science training page. Or just read on to find out what I picked up and reflected on.

At the very end of the session, we learnt that 39 journal titles have been suppressed in the latest edition. I mention it first because I think it is fascinating to see how journals go in and out of the JCR collection, since having a JCR impact factor at all is sometimes seen as a sign of quality. These suppressed titles are suspended and their editors are told why: it is apparently because of either a high self-citation rate, or something called “stacking”, whereby two journals are found to be citing each other in such a way that they significantly influence the latest impact factor calculations. Journals can come out of suspension, and indeed new journals are also added to JCR from year to year. Here are the details of the JCR selection process.

The training session began with a look at Web of Science: they’ve made it easier to see JCR data when you’re looking at the results of a Web of Science search, by clicking on the journal title: it’s good to see this link between TR products.

Within JCR, I like the visualisation that you get when you choose a subject category to explore: this tells you how many journals are in that category and you can tell the high impact factor journals because they have larger circles on the visualisation. What I particularly like though, is the lines joining the journals: the thicker the line, the stronger the citing relationship between the journals joined by that line.

It is the librarian in me that likes to see that visualisation: you can see how you might get demand for journals that cite each other, and thus get clues about how to manage your collection. The journal profile data that you can explore in detail for an individual journal (or compare journal titles) must also be interesting to anyone managing a journal, or indeed to authors considering submitting to a journal. You can look at a journal’s performance over time and ask yourself “is it on the way up?” You can get similar graphs on SJR, of course, based on Elsevier’s Scopus data and available for free, but there are not quite so many different scores on SJR as on JCR.

On JCR, for each journal there are new “indicators”, or measures/scores/metrics that you can explore. I counted 13 different types of scores. You can also explore more of the data behind the indicators presented than you used to be able to on JCR.

One of the new indicators is the “JIF percentile”. This was apparently introduced because the quartile information is not granular or meaningful enough: there could be lots of journals in the same quartile for a given subject category. I liked the normalised Eigenfactor score in the sense that the number has meaning at first glance: higher than 1 means higher than average, which is more meaningful than a standard impact factor (IF). (The Eigenfactor is based on JCR data but not calculated by TR. You can find out more about it at Eigenfactor.org, where you can also explore slightly older data and different scores, for free.)
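As a rough illustration of what a percentile adds over quartiles, here is a small Python sketch with invented impact factors. I am assuming the common rank-to-percentile convention of (N - rank + 0.5) / N; treat the exact formula that JCR uses as something to check for yourself rather than something I am quoting:

```python
def jif_percentile(category_ifs, journal):
    """Percentile of one journal within its subject category, ranking by impact factor.
    Uses the (N - rank + 0.5) / N convention, assumed here for illustration."""
    ranked = sorted(category_ifs, key=category_ifs.get, reverse=True)
    n = len(ranked)
    rank = ranked.index(journal) + 1   # rank 1 = highest impact factor
    return (n - rank + 0.5) / n * 100

# Invented impact factors for a tiny subject category of four journals
category = {"Journal A": 6.2, "Journal B": 3.1, "Journal C": 2.4, "Journal D": 0.9}
print(round(jif_percentile(category, "Journal B"), 1))  # 62.5 -- more granular than "Q2"
```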

If you want to explore more about JCR without signing up for a training session, then you could explore their short video tutorials and you can read more about the updates in the JCR Help file.