Keeping up to date with bibliometrics: the latest functions on Journal Citation Reports (InCites)

I recently registered for a free, live, online training session on the latest functions of Journal Citation Reports (JCR) on InCites, from Thomson Reuters (TR). I got called away during the session, but the great thing is that they e-mail you a copy so you can catch up later. You can’t ask questions, but at least you don’t miss out entirely! If you want to take part in a session yourself, take a look at the Web of Science training page. Or just read on to find out what I picked up and reflected on.

At the very end of the session, we learnt that 39 journal titles have been suppressed in the latest edition. I mention it first because I think it is fascinating to see how journals go in and out of the JCR collection, since having a JCR impact factor at all is sometimes seen as a sign of quality. These suppressed titles are suspended, and their editors are told why: it is apparently because of either a high self-cite rate, or something called “stacking”, whereby two journals are found to be citing each other in such a way that they significantly influence the latest impact factor calculations. Journals can come out of suspension, and indeed new journals are also added to JCR from year to year. Here are the details of the JCR selection process.

The training session began with a look at Web of Science: they’ve made it easier to see JCR data when you’re looking at the results of a Web of Science search, by clicking on the journal title: it’s good to see this link between TR products.

Within JCR, I like the visualisation that you get when you choose a subject category to explore: it tells you how many journals are in that category, and you can spot the high impact factor journals because they have larger circles on the visualisation. What I particularly like, though, are the lines joining the journals: the thicker the line, the stronger the citing relationship between the journals it joins.
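
The encoding behind that visualisation is simple enough to sketch. Here is a toy version in Python, using networkx and matplotlib; it has nothing to do with TR’s actual implementation, and all the journal names, impact factors and citation counts are invented.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Invented journals and impact factors, purely for illustration.
impact = {"Journal A": 9.2, "Journal B": 4.1, "Journal C": 2.3, "Journal D": 1.1}

G = nx.Graph()
G.add_nodes_from(impact)
# Invented citation traffic between pairs of journals.
G.add_weighted_edges_from([
    ("Journal A", "Journal B", 120),
    ("Journal A", "Journal C", 40),
    ("Journal B", "Journal D", 15),
])

pos = nx.spring_layout(G, seed=1)
# Circle size scales with impact factor; line width with citation strength.
nx.draw_networkx_nodes(G, pos, node_size=[impact[j] * 300 for j in G.nodes])
nx.draw_networkx_edges(G, pos, width=[G[u][v]["weight"] / 20 for u, v in G.edges])
nx.draw_networkx_labels(G, pos, font_size=8)
plt.axis("off")
plt.show()
```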

It is the librarian in me that likes to see that visualisation: you can see how you might get demand for journals that cite each other, and thus get clues about how to manage your collection. The journal profile data that you can explore in detail for an individual journal (or compare journal titles) must also be interesting to anyone managing a journal, or indeed to authors considering submitting to a journal. You can look at a journal’s performance over time and ask yourself “is it on the way up?” You can get similar graphs on SJR, of course, based on Elsevier’s Scopus data and available for free, but there are not quite so many different scores on SJR as on JCR.

On JCR, for each journal there are new “indicators”, or measures/scores/metrics that you can explore. I counted 13 different types of scores. You can also explore more of the data behind the indicators presented than you used to be able to on JCR.

One of the new indicators is the “JIF percentile”. This has apparently been introduced because the quartile information is not granular or meaningful enough: there could be lots of journals in the same quartile for a given subject category. I also liked the normalised Eigenfactor score, in the sense that the number has meaning at first glance: higher than 1 means higher than average, which is more meaningful than a standard impact factor (IF). (The Eigenfactor is based on JCR data but not calculated by TR. You can find out more about it at Eigenfactor.org, where you can also explore slightly older data and different scores, for free.)
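
For the curious, the JIF percentile is (as far as I understand it) just a transformation of a journal’s rank within its subject category. Here is a minimal sketch in Python, assuming the formula (N − R + 0.5) / N, where N is the category size and R is the journal’s rank by impact factor; the example data is invented.

```python
def jif_percentile(rank: int, category_size: int) -> float:
    """Convert a descending JIF rank (1 = highest) into a percentile (0-100)."""
    return (category_size - rank + 0.5) / category_size * 100

# Invented example: a subject category containing 40 journals.
for rank in (1, 10, 20, 40):
    print(f"rank {rank:>2} of 40 -> {jif_percentile(rank, 40):.1f}")
# rank  1 of 40 -> 98.8
# rank 10 of 40 -> 76.2
# rank 20 of 40 -> 51.2
# rank 40 of 40 -> 1.2
```

You can see why this is more granular than quartiles: ranks 1 and 10 both sit in the top quartile of a 40-journal category, yet their percentiles (98.8 vs 76.2) are far apart.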

If you want to explore more about JCR without signing up for a training session, then you could explore their short video tutorials and you can read more about the updates in the JCR Help file.

Storytelling and new ideas to listen to, for information professionals

When I’m just warming up of a morning, I like to listen to BBC Radio 4 podcasts. I’ve been picking my way through the series called Four Thought, where speakers share stories and ideas. There are three episodes in particular that I’d like to highlight for information professionals:

Maria Popova: The Architecture of Knowledge – a fascinating look at the way we handle information and create wisdom, incorporating views on knowledge from history but considering the modern, digital era of information overload. A great story!

Rupert Goodwins – tracks human behaviour on the Internet and considers: How can the Internet bring us together to discuss and share with each other in a respectful, reasoned way? How can we avoid arguments and incivility? The speaker has lots of experience and ideas.

This last talk is of interest because of the course I’ve been teaching at the Humboldt Uni IBI, on Information ethics. In the course, we explore all sorts of issues, including policies for websites that the students as information professionals of the future might play a part in hosting, and the ethical matters behind them, such as authenticity vs anonymity, moderating comments, handling whistleblowers, etc.

Another Four Thought that I found a little bit uncomfortable to listen to was:

Cindy Gallop: Embracing Zero Privacy – recommends taking control of your digital presence, and I agree with that. The speaker has some good ideas, chiefly that “we are what we do”, framed in a very positive and empowering way, but what I find difficult is the notion that we can all live in such an open way. What about people who live in a society that is unaccepting of who they are? What about mistakes from the past, for which a debt has been paid: should they be laid forever bare? What about keeping a personal life personal, even whilst sharing matters of professional interest? On balance, I’m not a fan of zero privacy, but this talk is a great opener for discussion.

There are plenty of other talks that provide food for thought in the Radio 4 podcast archives, on all sorts of topics and not only in the Four Thought series. I also like the Reith Lectures, the “Life Scientific”, and “In Our Time”… so much more to listen to!

Peer review motivations and measurement

Yesterday’s blogpost by David Crotty on Scholarly Kitchen outlines the problems with the notion of giving credit for peer review. It is very thought-provoking, although I’m personally still keen to see peer review done in the open, and to explore the notion of credit for peer review some more. For me, the real question is not whether to measure it, but how best to measure it and what value to set on that measure.

Both the blogpost and its comments discuss researchers’ current motivation for carrying out peer review:

  • To serve the community & advance the field (altruism?)
  • To learn what’s new in the field (& learn before it is published, i.e. before others!)
  • To impress editors/publishers (& thereby increase own chances of publication)
  • To contribute to a system in which their own papers will also benefit (self interest?)

Crotty writes that problems would arise in peer review if we changed researchers’ motivation so that they chased credit points, because their behaviour would change accordingly. He poses some very interesting questions, including:

How much career credit should a researcher really expect to get for performing peer review?

I think that’s a great question! However, I do think that we should investigate potential ways to give credit for peer review. I’ve previously blogged about the problems with peer review and followed up on those thoughts and I’ve no doubt that I’ll continue to give this space more thought: peer review is about quality, and as a librarian at heart, I’m keen that we have good quality information available as widely as possible.

In David Crotty’s post I am particularly concerned by the notion that researchers, as currently intrinsically motivated, will be prepared to take on higher workloads. I don’t want that for researchers: they are already under enormous amounts of pressure. Not all academics can work all waking hours. Some actually do (at least some of the time), I know, but presumably someone else cleans and cooks for them (wives? paid staff?), and even if all researchers had someone to do that for them, it is neither fair to the researchers nor good for academia to be made up of such isolated individuals.

One commenter makes the point that not all peer reviews are alike: some might take a day, some 20 minutes, so if credit is given simply according to how many reviews someone has carried out, it won’t be quite fair. And yet, as Crotty argued in his blogpost, if you complicate your measurement then it becomes overkill, because no-one really cares to know more than a simple count. Perhaps that’s part of what needs fixing with peer review: a little more uniformity of practice. Is it fair to the younger journals (probably with papers from early career researchers who don’t trust themselves to submit to the journal giants) that they get comparatively cursory time from peer reviewers?
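
To make the commenter’s point concrete, here is a tiny sketch with invented numbers, showing how a raw count and a time-weighted measure can rank the same two reviewers in opposite orders.

```python
# Hours spent per review by two hypothetical reviewers (numbers invented).
reviews = {
    "Reviewer X": [0.33] * 6,   # six quick, roughly 20-minute reviews
    "Reviewer Y": [8.0, 8.0],   # two day-long reviews
}

for name, hours in reviews.items():
    print(f"{name}: count = {len(hours)}, hours = {sum(hours):.1f}")
# Reviewer X: count = 6, hours = 2.0
# Reviewer Y: count = 2, hours = 16.0
```

By the simple count, Reviewer X looks three times as productive; by hours invested, Reviewer Y does far more. Neither number is wrong; they just measure different things.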

Another comment mentions that the current system favours free riding: not everyone carries out peer review, even though everyone benefits from the system. The counterpoint comes in another comment, which points out that there is already a de facto system of credit: journal editors know who is carrying out peer review, and editors wield real power, reviewing papers and sitting on funding panels. I’m not sure that I’d want to rely on a busy editor’s memory to get the credit I deserved, but the idea reminded me of how the peer review system has worked up until now, and the issue seems to be that the expanding, increasingly international research and publishing community is no longer as close-knit as it once was.

There is a broader issue here. Crotty suggested that university administrators would not want researchers to take the time to do peer review, but to do original research all the time since that’s what brings in the money and the glory. But in order to be a good researcher (and pull in the grant funding), one has to read others’ papers, and be aware of the direction of research in the field. Plus, review papers are often more highly cited than original research papers, so surely those administrators will want researchers who produce review papers and pull in the citations? Uni rankings often use bibliometric data, and administrators do care about those!

What we’re really talking about is ‘how to measure researchers’ performance’, and perhaps peer review (if openly measured) is a part of that, perhaps not. I like the notion of some academics becoming expert peer reviewers, whilst others are expert department/lab leaders, grant writers, authors or even teachers. We all have different strengths, and perhaps it’s not realistic to expect all of our researchers to do everything; but if you want a mixture in your team then you need to know who is doing what.

I’d like to finish with Kent Anderson’s thoughtful comment about retaining excellent reviewers:

Offering credit and incentives aimed at retaining strong reviewers is different from creating an incentives system to make everyone a reviewer (or to make everyone want to be a reviewer).

Let’s think on it some more…

Clear out your e-mail inbox with Boomerang!

This is the story of why I like to use Boomerang. It works with Google mail so if you don’t use Gmail or don’t want Google to have your e-mails then it’s probably not for you. (Although you might find this post by Benjamin Mako Hill an interesting read, if you are keen to block Google from accessing your emails. I digress…).

If you’re like me and use your e-mail inbox as a bit of a “to do” list, well, you probably know that it isn’t the most efficient of such lists. You probably have another, real to-do list somewhere else (mine: pieces of paper floating round my desk) and have to balance your inbox with that list/those lists. Maybe, like me, you also leave messages for yourself on your calendar for any important deadlines, and every now and then you try to block out time on your calendar and plan in advance, so that your colleagues can also see when you’re busy and when you’re available.

I once read somewhere that every time you have to read an e-mail twice, you’re wasting time! And yet you sometimes do have to read them twice: once to know that it’s nothing urgent (perhaps on your smartphone, of an evening), and then a second time when you’re ready to deal with its contents (e.g. the next working day). OK, so checking e-mails on your smartphone like this is definitely a waste of time, but sometimes you read stuff at work and know that you can come back to it in a couple of days, or even later.

Then, you leave it in your inbox and later you have to wade through your inbox to get to the e-mail that you know is now urgent/important, and there’s a risk that you might not remember it in time. Who hasn’t had to apologise to someone for leaving their e-mail buried for too long? I know it’s not just me…

With Boomerang though, I can send emails out of my inbox, and set them to come back at a time when I will need to/be able to deal with them. You get 10 such “boomerangs” per month for free: it definitely helps to keep the clutter out of my inbox.
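
The underlying idea is just deferral, and it is easy to sketch. The toy Python below has nothing to do with Boomerang’s real implementation: it simply parks messages in a queue keyed by their return time, with made-up example messages.

```python
import heapq
from datetime import datetime, timedelta

# A toy model of the "boomerang" idea: park a message now, get it back later.
parked = []  # min-heap of (return_time, message) tuples

def boomerang(message: str, days: float) -> None:
    """Park a message until `days` from now."""
    heapq.heappush(parked, (datetime.now() + timedelta(days=days), message))

def back_in_inbox(now: datetime) -> list[str]:
    """Pop every message whose return time has arrived."""
    out = []
    while parked and parked[0][0] <= now:
        out.append(heapq.heappop(parked)[1])
    return out

boomerang("Reply to the journal editor", days=2)
boomerang("Chase that invoice", days=7)
print(back_in_inbox(datetime.now() + timedelta(days=3)))
# ['Reply to the journal editor']
```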

Now all you have to do is get rid of the uneasy feeling: just because your inbox is not packed full does not mean that you have no work to do!

A super-quick way to create a blog post!

There are 2 super-quick ways to create blogposts in WordPress that I’ve tried out, although if you read my investigations below, you’ll see why I only recommend the first one!

1) the “Re-blog” option.
Found something interesting on another WordPress blog? You could tweet about it, or you could actually re-blog it to your own blog. Here is an example of my use of the re-blogging feature, which I like but use sparingly. After all, this is my blog: it’s for my work! For me personally, re-blogging also feels a bit like cheating, but I’m growing used to it. There is actually something very social about re-blogging, and I wouldn’t mind at all if others re-blogged my posts. So on reflection, it’s OK from time to time, and for particularly well written stuff!

2) the WordPress bookmarklet
This post actually began when I pressed the “blogpost” bookmarklet, to generate a blog post from a webpage. It generated a title for me:

Researchers argue for standard format to cite lab resources : Nature News & Comment

And then in the content it simply had:

via Researchers argue for standard format to cite lab resources : Nature News & Comment.

Hmm, not so pretty or so useful to readers. This is not really super-quick because it requires me to add more content. I suppose it’s useful as a way for me to create a quick draft post that I can come back to, if I want to blog about a particular webpage.

Thanking for re-tweets: efficient, friendly & worth a try

Twitter really is social media, not just a broadcast & info consumption channel. Sometimes, though, it’s hard to find time to invest in being more social. Saying thanks for a re-tweet is something I’ve already blogged about, but I’ve never felt that I’ve got entirely the right approach. What happens when I’m on holiday, or ill, or just too occupied with other things?

Recently I saw a thank you to me, and I noticed that it was from a service that auto-tweets, but I still thought it sounded nice so I investigated. In general, I don’t value auto-tweets, and I don’t want to automatically, meaninglessly thank folks for everything, but I really like what Sumall do. Here is an example of a tweet that they sent out on my behalf:

My best RTs this week came from: @aleebrahim @SciPubLab @ilk21 #thankSAll Who were yours? http://sumall.com/thankyou 

This was favourited and re-tweeted by one of the recipients, so I’m not alone in liking the way these tweets are written!

Be sure to investigate the settings if you use Sumall. You might want to unsubscribe from the daily email reports if you’re not a social media pro. You can also edit your Twitter preferences and tell it not to bother bragging about your Twitter performance every week/month. And you can perhaps use it to investigate some stats so that you know which are your high-hitting tweets, so that you can strategically brag to your own managers!

 

How journals could “add value”

jennydelasalle:

Some great ideas of what academic authors & readers can ask publishers for. Especially regarding handling & presenting data.

Originally posted on opiniomics:

I wrote a piece for Genome Biology, you may have read it, about open science.  I said a lot of things in there, but one thing I want to focus on is how journals could “add value”.  As brief background: I think if you’re going to make money from academic publishing (and I have no problem if that’s what you want to do), then I think you should “add value”.  Open science and open access is coming: open access journals are increasingly popular (and cheap!), preprint servers are more popular, green and gold open access policies are being implemented etc etc. Essentially, people are going to stop paying to access research articles pretty soon – think 5-10 year time frame.

So what can journals do to “add value”?  What can they do that will make us want to pay to access them?  Here are a few ideas…


Working from home works for me!

I already blogged about the things I like about working from home… so here is the flip side: four things that are not so great, and how I overcome them.

  1. It can be lonely: telephone and videoconferencing help to overcome this, but really, loneliness isn’t something I struggle with. All the e-mail interactions help, too.
  2. I lack a change of scenery in my day. But when I get a change, it really helps: it’s amazing how much a little lunch time walk can lift my spirits and inspire me.
  3. It’s annoying sometimes when the weather is great and I don’t even get a commute in it. I can always just step out onto the balcony for a breath of fresh air, though. That is better than working in most offices!
  4. I have to cook and wash up for myself at tea/lunch time… as well as all the breakfast and dinner things…

I got 7 advantages and only 4 disadvantages, so there’s proof that it works for me.

Quality measurement: we need landscape-reading skills. 5 tips!


The academic publishing landscape is a shifting one. I like to watch the ALPSP awards to see what’s happening in academic publishing across the disciplines, and indeed to keep an eye on the e-learning sector. Features of the landscape are shifting under our feet in the digital age, so how can we find our way through it? I think that we need to be able to read the landscape itself. You can skip to the bottom of this post for my top tips, or read further for more explanation & links!

One of the frequent criticisms levelled at open access journals has been that they were not all about high quality work. Indeed, with an incentive to haul in as many author payments as possible, a publisher might be tempted to lower the quality threshold and publish more articles. An article in the Guardian by Curt Rice, from two years ago, explains some of this picture, and more.

However, quality control is something important to all journals, whether OA or not: in order to attract the best work, they have to publish it alongside similar quality articles. Journal and publisher brands matter. As new titles, often with new publishers, OA journals once needed to establish their quality brands: this is no longer the case for all OA journals. Andrew Bonamici wrote a nice blogpost on identifying the top OA journals in 2012.

And of course, OA journals, being new and innovative, have had the opportunity to experiment with peer review mechanisms. Peer review is the gold standard of quality filters for academic journals, as I explored in earlier blogposts.  So, messing with this is bound to lead to accusations of lowering the quality! But not all OA journals vary from the gold standard: many use peer review, just as traditional journals do.

In reality, peer review happens in different ways at different journals. It might be open, blind or double blind. It might be carried out by two or three reviewers, and an editor might or might not have the final decision. The editor might or might not mediate the comments sent back to the author, in order to assist in polishing the article. The peer reviewers might get guidelines on what is expected of them, or not. There is a variety of practice in peer review, from one discipline to the next, and one publisher to the next, if not from one journal to the next. And as the Guardian article I mentioned earlier points out, dummy or spoof articles have been known to make it through peer review processes. So peer review itself is not always a guarantee of quality. Rather, it is a sign to watch out for in our landscape.

For some academic authors there are quality lists for their discipline, but how good are the lists? A recent article in the Times Higher by Dennis Tourish criticises the ABS guide to journal quality, which has often been used in business and management studies. Australia’s ERA once used journal rankings, but dropped them, as this article by Jill Rowbotham described.

Fortunately, academics know how to think for themselves. They know how to question what they find. They don’t always accept what they’re told! So, we librarians can tell them where to find such lists. We can show them how to look up a journal’s h-index or its impact factor, and we can explain what a cited half-life is (I like Anne-Wil Harzing’s website for information on this). But, as with the traditional reference interview, the real skill for the author is in knowing what they need.
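
Since the h-index comes up in those conversations, here is a minimal sketch of what it measures: the largest h such that h of a journal’s (or author’s) papers have at least h citations each. The citation counts below are invented.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for position, cites in enumerate(ranked, start=1) if cites >= position)

print(h_index([25, 8, 5, 4, 3, 1, 0]))  # 4: four papers have 4+ citations each
```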

There will always be a compromise: a slightly lower-ranked journal with a faster turnaround; a slower journal with better peer review mechanisms for helping you to polish your work; the fast, innovative young journal that will market your work heavily. Not to mention matching the subject of the article to the journal! There are many factors for the author to consider.

So how do we read the landscape? Here are my tips:

  1. We can take a look at the old guides, of course: the lists are not completely redundant but we need to question whether what we see matches what they describe.
  2. We can question whether a score or measure is for a characteristic that we value.
  3. We can talk to people who have been there before, i.e. experienced, published authors.
  4. We can tentatively scout ahead, and try a few things out with our most experimental work.
  5. We can scan the horizon, and watch what pioneers are doing: what works well there? As well as the sources I mention in my opening paragraph, I like to read the Scholarly Kitchen for horizon scanning.

Ultimately, we need to be alert, to draw on all our knowledge and experience, and to be open and aware of our publishing needs. The best way to do this is to be a reader and consumer of published outputs in your discipline, and a member of the academic community. That way, you will know what your goal looks like, and you’ll recognise it when you see it, out there in the shifting sands of academia.