Ensuring quality and annotating scientific publications. A summary of a Twitter chat

Screenshot of a Twitter conversation

Last year (yes, I’m slow to blog!), I had a very productive conversation (or rather, a couple of conversations) on Twitter with Andrew Marsh, a former colleague & scientist at the University of Warwick. They are worth documenting here, both to give them a narrative and to illustrate how Twitter sometimes works.

Back in November 2015, while reporting on a presentation by Stuart Cantrill, Chief Editor of Nature Chemistry, Andrew tweeted to ask who would be willing to sign their reviews of manuscripts. I replied on Twitter by asking whether such openness would make reviewers take more time over their reviews (thereby slowing peer review down). I also wondered whether openness would make reviewers less direct, and therefore possibly less helpful because their comments would be more open to interpretation, and whether such open criticism would drive authors to engage in even more “pre-submission”, informal peer reviewing.

Andrew tells me that, at the original event “a show of hands and brief discussion in the room revealed that PIs or those who peer reviewed manuscripts regularly, declared themselves happy to reveal their identity whereas PhD students or less experienced researchers felt either unsure or uncomfortable in doing so.”

Our next chat was kick-started when Andrew pointed me to a news article from Nature that highlighted a new tool for annotating web pages, Hypothes.is. In the Twitter chat that ensued, we considered:

  1. Are such annotations a kind of post-publication peer review? I think that they can work alongside traditional peer review, but as Andrew pointed out, they lack structure so they’re certainly no substitute.
  2. Attribution of such comments is important so that readers would know whose comments they are reading, and also possibly enable tracking of such activity, so that the work could be measured. Integration with ORCID would be a good way to attribute comments. (This is already planned, it seems: Dan Whaley picked up on our chat here!)
  3. Andrew wondered whether such comments could be tracked for altmetrics, and Altmetric.com responded. Comments on Hypothes.is could signal scholarly attention for the work on which they comment, or indeed attract attention themselves (a sketch of how annotation counts might be pulled from the Hypothes.is API follows this list). It takes a certain body of comments before measuring them from such a source becomes valuable, but does measurement itself incentivise researchers to comment? I’m really interested in the latter point: motivation cropped up in an earlier blogpost of mine on peer review. I suspect that researchers will say that measurement does not affect them, but I’m also sure that some of them are well aware of, eg, their ResearchGate score!
  4. Such a tool offers a function similar to marginalia and scrawls in library books. Some are helpful shortcuts (left by altruists, or just those who wanted to help their future selves?!), some are rubbish (amusing at their best), and sometimes you recognise the handwriting of an individual who makes useful comments, hence the importance of attribution.
  5. There are also some similarities with social bookmarking and other collaboration tools online, where you can also publish reviews or leave comments on documents and publications.
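
To make point 3 above a little more concrete, here is a minimal sketch of how annotation activity on a given article might be counted via the Hypothes.is public search API. This is my own illustration rather than anything Altmetric.com or Hypothes.is prescribe, and the article URL, field names and endpoint behaviour should be treated as assumptions to verify against the current API documentation.

```python
# Minimal sketch (my own illustration, not an Altmetric.com or Hypothes.is
# recipe): counting the public annotations anchored to an article URL and
# listing who made them. Assumes the public search endpoint accepts a `uri`
# parameter and returns JSON with `total` and `rows` fields; check the
# current Hypothes.is API documentation before relying on this.
import requests

API = "https://api.hypothes.is/api/search"


def annotation_count(article_url: str) -> int:
    """Number of public annotations anchored to article_url."""
    response = requests.get(API, params={"uri": article_url, "limit": 50}, timeout=10)
    response.raise_for_status()
    return response.json().get("total", 0)


def annotators(article_url: str) -> set:
    """Usernames of public annotators: attribution is what makes the activity meaningful."""
    response = requests.get(API, params={"uri": article_url, "limit": 50}, timeout=10)
    response.raise_for_status()
    return {row.get("user", "unknown") for row in response.json().get("rows", [])}


if __name__ == "__main__":
    url = "https://example.org/some-article"  # hypothetical article URL
    print(annotation_count(url), "public annotations, by", annotators(url))
```

Counts like these are exactly the sort of signal an altmetrics provider could aggregate, and the user field is what would make attribution, and eventually ORCID linking, possible.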

And who thought that you couldn’t have meaningful conversations on Twitter?! You can also read responses on Twitter to eLife’s tweet about its piece on the need for open peer review.

The best part of this conversation between Andrew and me on Twitter was the ability to bring in others, by including their Twitter handles. We also picked up on what others were saying, like this tweet about journal citation distributions from Stephen Curry. The worst parts were trying to be succinct when making a point (while also wanting to develop some points); feeling the need to collate the many points raised; and sometimes forgetting to flag people.

Just as well you can also blog about these things, then!

 

Is this research article any good? Clues when crossing disciplines and asking new contacts.

As a reader, you know whether a journal article is good or not by any number of signs. Within your own field of expertise, you know quality research when you see it: you know, because you have done research yourself and you have read & learnt lots about others’ research. But what about when it’s not in your field of expertise?

Perhaps the most reliable marker of quality is a recommendation from an expert in the field. But if you find something intriguing for yourself that is outside your usual discipline, how do you know if it’s any good? It’s a good idea to ask someone for advice, and if you already know someone then great; but if not, there’s a lot you can do for yourself before you reach out for help, to make sure you make a good impression on a new contact.

Librarians teach information skills, and we might suggest that you look for clues such as:

  1. relevance: skim the article: is it something that meets your need? – WHAT
  2. the author(s): do you know the name: is it someone whose work you value? If not, what can you quickly find out about them, eg other publications in their name or who funds their work: is there a likely bias to watch out for? – WHO & WHY 
  3. the journal title/publisher: do you already know that they usually publish high quality work? Is it peer reviewed and if so, how rigorously? What about the editorial board: any known names here? Does the journal have an impact factor? Where is it indexed: is it in the place(s) that you perform searches yourself? – WHERE 
  4. date of publication: is it something timely to your need? – WHEN
  5. references/citations: follow some: are they accurate and appropriate? When you skim read the item, is work from others properly attributed & referenced? – WHAT
  6. quality of presentation: is it well written/illustrated? Of course, absolute rubbish can be eloquently presented, and quality research badly written up. But if the creators deemed the output of high enough value for a polished effort, then maybe that’s a clue. – HOW
  7. metrics: has it been cited by an expert? Or by many people? Are many reading & downloading it? Have many tweeted or written about it (altmetrics tools can tell you this)? But you don’t always follow the crowd, do you? If you do, then you might miss a real gem, and isn’t your research a unique contribution?! – WHO

I usually quote Rudyard Kipling at this point:

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

So far, so Library school 101. But how do you know if the research within is truly of high quality? If most published research findings are false, as John Ioannidis describes, then how do you separate the good research from the bad?

An understanding of the discipline would undoubtedly help, and speed up your evaluation. But you can help yourself further, partly in the way you read the paper. There are some great pieces out there about how to read a scientific paper, eg from Natalia Rodriguez.

As I read something for the first time, I look at whether the article sets itself in the context of existing literature and research: can you track and understand the connections? The second thing I look at is the methodology and methods: have the right ones been used? This may be especially hard to tell if you’re not an expert in the field, so you have to get familiar with the methodology used in the study and think about how it applies to the problem being researched. Maybe coming from outside the discipline will give you a fresh perspective. You could also consider the other methodologies that might have applied (a part of peer review, for many journals). I like the recommendation from Phil Davis in the Scholarly Kitchen that the methodology chosen for the study should be appropriate or persuasive.

If the chosen methodology just doesn’t make sense to you, then this is a good time to seek out someone with expertise in the discipline, for a further explanation. By now you will have an intelligent question to ask such a contact, and you will be able to demonstrate the depth of your own interest. How do you find a new contact in another discipline? I’ll plug Piirus here, whose blog I manage: it is designed to quickly help researchers find collaborators, so you could seek contacts & reading recommendations through Piirus. And just maybe, one day your fresh perspective and their expertise could lead to a really fruitful collaboration!

Keeping up to date with bibliometrics: the latest functions on Journal Citation Reports (InCites)

I recently registered for a free, live, online training session on the latest functions of Journal Citation Reports (JCR) on InCites, from Thomson Reuters (TR). I got called away during the session, but the great thing is that they e-mail you a copy afterwards, so you can catch up later. You can’t ask questions, but at least you don’t miss out entirely! If you want to take part in a session yourself, take a look at the Web of Science training page. Or just read on to find out what I picked up and reflected on.

At the very end of the session, we learnt that 39 journal titles have been suppressed in the latest edition. I mention it first because I think it is fascinating to see how journals go in and out of the JCR collection, since having a JCR impact factor at all is sometimes seen as a sign of quality. These suppressed titles are suspended and their editors are told why: it is apparently because of either a high self-citation rate, or something called “stacking”, whereby two journals are found to be citing each other in such a way that they significantly influence the latest impact factor calculations. Journals can come out of suspension, and indeed new journals are also added to JCR from year to year. Here are the details of the JCR selection process.
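
To illustrate what those two patterns look like, here is a toy sketch with entirely made-up numbers: a self-citation rate, and the share of one journal’s citations that come from a single partner journal. The actual suppression criteria and thresholds belong to the JCR team and are not reproduced here.

```python
# Toy illustration only, with made-up numbers: the two patterns described
# above. The real JCR suppression criteria and thresholds are not public in
# this form, so this is just to show what the quantities mean.
cites = {
    # citing journal -> {cited journal: citations counted}
    "Journal A": {"Journal A": 120, "Journal B": 300, "Journal C": 15},
    "Journal B": {"Journal A": 280, "Journal B": 40, "Journal C": 20},
    "Journal C": {"Journal A": 10, "Journal B": 25, "Journal C": 30},
}


def incoming(journal: str) -> int:
    """Total citations received by `journal` from all journals in the toy matrix."""
    return sum(row.get(journal, 0) for row in cites.values())


def self_cite_rate(journal: str) -> float:
    """Share of a journal's incoming citations that come from the journal itself."""
    return cites[journal].get(journal, 0) / incoming(journal)


def exchange_share(journal: str, partner: str) -> float:
    """Share of a journal's incoming citations that come from a single partner journal;
    a very high value for a pair of journals is the sort of pattern 'stacking' describes."""
    return cites[partner].get(journal, 0) / incoming(journal)


for j in cites:
    print(j, f"self-citation rate: {self_cite_rate(j):.0%}")
print(f"Share of Journal A's citations that come from Journal B: "
      f"{exchange_share('Journal A', 'Journal B'):.0%}")
```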

The training session began with a look at Web of Science: they’ve made it easier to see JCR data when you’re looking at the results of a Web of Science search, by clicking on the journal title: it’s good to see this link between TR products.

Within JCR, I like the visualisation that you get when you choose a subject category to explore: it tells you how many journals are in that category, and you can spot the high impact factor journals because they have larger circles on the visualisation. What I particularly like, though, are the lines joining the journals: the thicker the line, the stronger the citing relationship between the journals it joins.
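
For anyone curious about how such a picture is put together, here is an illustrative sketch of the same visual encoding, using invented journals and numbers rather than JCR data: circle size driven by impact factor, line thickness by the strength of the citing relationship.

```python
# Illustrative sketch only (not JCR's own visualisation, and the journals and
# numbers are invented): the same visual encoding, with circle size driven by
# impact factor and line thickness by the strength of the citing relationship.
import matplotlib.pyplot as plt
import networkx as nx

impact_factor = {"Journal A": 8.2, "Journal B": 3.1, "Journal C": 1.4}
citation_links = [
    # (journal, journal, strength of the citing relationship between them)
    ("Journal A", "Journal B", 300),
    ("Journal A", "Journal C", 40),
    ("Journal B", "Journal C", 120),
]

G = nx.Graph()
G.add_nodes_from(impact_factor)
G.add_weighted_edges_from(citation_links)

pos = nx.spring_layout(G, seed=42)
node_sizes = [impact_factor[n] * 300 for n in G.nodes]      # bigger circle = higher impact factor
edge_widths = [G[u][v]["weight"] / 50 for u, v in G.edges]  # thicker line = stronger citing link

nx.draw_networkx(G, pos, node_size=node_sizes, width=edge_widths)
plt.axis("off")
plt.show()
```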

It is the librarian in me that likes to see that visualisation: you can see how you might get demand for journals that cite each other, and thus get clues about how to manage your collection. The journal profile data that you can explore in detail for an individual journal (or compare journal titles) must also be interesting to anyone managing a journal, or indeed to authors considering submitting to a journal. You can look at a journal’s performance over time and ask yourself “is it on the way up?” You can get similar graphs on SJR, of course, based on Elsevier’s Scopus data and available for free, but there are not quite so many different scores on SJR as on JCR.

On JCR, for each journal there are new “indicators”, or measures/scores/metrics that you can explore. I counted 13 different types of scores. You can also explore more of the data behind the indicators presented than you used to be able to on JCR.

One of the new indicators is the “JIF percentile”. This was apparently introduced because quartile information is not granular or meaningful enough: there can be lots of journals in the same quartile for a subject category. I liked the normalised Eigenfactor score in the sense that the number has meaning at first glance: higher than 1 means higher than average, which is more meaningful than a standard impact factor (IF). (The Eigenfactor is based on JCR data but not calculated by TR. You can find out more about it at Eigenfactor.org, where you can also explore slightly older data and different scores, for free.)
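
As a rough sketch of the arithmetic behind those two indicators: the JIF percentile formula used below is the one commonly quoted for JCR, but treat it as an assumption and check the JCR Help file for the authoritative definition; the numbers are invented.

```python
# A sketch of the arithmetic. The JIF percentile formula below,
# (N - R + 0.5) / N, is the one commonly quoted for JCR, where N is the
# number of journals in the category and R is the rank by impact factor
# (1 = highest); treat it as an assumption and check the JCR Help file.
def jif_percentile(rank: int, category_size: int) -> float:
    """Percentile of a journal ranked `rank` among `category_size` journals."""
    return (category_size - rank + 0.5) / category_size * 100


# Two journals in the same quartile of a 200-journal category can still sit
# far apart, which is the extra granularity the indicator adds:
print(jif_percentile(rank=5, category_size=200))   # 97.75 (near the top of Q1)
print(jif_percentile(rank=48, category_size=200))  # 76.25 (near the bottom of Q1)

# The normalised Eigenfactor is scaled so that the average JCR journal scores
# 1.0, which is why a value above 1 reads immediately as "above average".
```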

If you want to explore more about JCR without signing up for a training session, then you could explore their short video tutorials and you can read more about the updates in the JCR Help file.

Quality measurement: we need landscape-reading skills. 5 tips!


The academic publishing landscape is a shifting one. I like to watch the ALPSP awards, to see what’s happening in academic publishing, across the disciplines, and indeed to keep an eye on the e-learning sector. Features of the landscape are shifting under our feet in the digital age, so how can we find our way through them? I think that we need to be able to read the landscape itself. You can skip to the bottom of this post for my top tips, or read further for more explanation & links!

One of the criticisms frequently levelled at open access journals has been that not all of them publish high quality work. Indeed, with an incentive to haul in as many author payments as possible, a publisher might be tempted to lower the quality threshold and publish more articles. An article in the Guardian by Curt Rice, from two years ago, explains some of this picture, and more.

However, quality control is important to all journals, whether OA or not: in order to attract the best work, they have to publish it alongside articles of similar quality. Journal and publisher brands matter. As new titles, often with new publishers, OA journals once needed to establish their quality brands; this is no longer the case for all OA journals. Andrew Bonamici wrote a nice blogpost on identifying the top OA journals in 2012.

And of course, OA journals, being new and innovative, have had the opportunity to experiment with peer review mechanisms. Peer review is the gold standard of quality filters for academic journals, as I explored in earlier blogposts. So, messing with this is bound to lead to accusations of lowering quality! But not all OA journals vary from the gold standard: many use peer review, just as traditional journals do.

In reality, peer review happens in different ways at different journals. It might be open, blind or double blind. It might be carried out by two or three reviewers, and an editor might or might not have the final decision. The editor might or might not mediate the comments sent back to the author, in order to assist in the article’s polishing. The peer reviewers might get guidelines on what is expected of them, or not. There is a variety of practice in peer review from one discipline to the next, and one publisher to the next, if not from one journal to the next. And as the Guardian article I mentioned earlier points out, dummy or spoof articles have been known to make it through peer review processes. So peer review itself is not always a guarantee of quality. Rather, it is a sign to watch out for in our landscape.

For some academic authors there are quality lists for their discipline, but how good are the lists? A recent article in the Times Higher by Dennis Tourish criticises the ABS guide to journal quality, which has often been used in business and management studies. Australia’s ERA once used journal rankings, but dropped them, as this article by Jill Rowbotham described.

Fortunately, academics know how to think for themselves. They know how to question what they find. They don’t always accept what they’re told! So, we librarians can tell them where to find such lists. We can show them how to look up a journal’s h-index or its impact factor, and we can explain what a cited half-life is (I like Anne-Wil Harzing’s website for information on this). But, as with the traditional reference interview, the real skill lies in knowing what you need.
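
For readers who like to see the arithmetic, here is a toy worked example, with invented numbers, of two of the metrics just mentioned, using their standard definitions: the two-year impact factor and a journal h-index.

```python
# Toy worked example, with invented numbers, of two of the metrics named
# above, using their standard definitions. Cited half-life is left out: it
# needs a full citation-age distribution rather than a couple of totals.

def impact_factor(cites_to_prev_two_years: int, items_in_prev_two_years: int) -> float:
    """Classic two-year JIF: citations in year Y to items published in years Y-1
    and Y-2, divided by the number of citable items published in Y-1 and Y-2."""
    return cites_to_prev_two_years / items_in_prev_two_years


def h_index(citation_counts: list) -> int:
    """Largest h such that h of the papers have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for position, c in enumerate(counts, start=1) if c >= position)


print(impact_factor(cites_to_prev_two_years=450, items_in_prev_two_years=180))  # 2.5
print(h_index([25, 18, 12, 9, 6, 4, 2, 1, 0]))                                  # 5
```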

There will always be a compromise: a slightly lower-ranked journal with a faster turnaround; a slower journal with better peer review mechanisms to help you polish your work; a fast, innovative young journal that will market your work heavily. Not to mention how well the article’s subject matches the journal’s scope! There are many factors for the author to consider.

So how do we read the landscape? Here are my tips:

  1. We can take a look at the old guides, of course: the lists are not completely redundant but we need to question whether what we see matches what they describe.
  2. We can question whether a score or measure is for a characteristic that we value.
  3. We can talk to people who have been there before, i.e. experienced, published authors.
  4. We can tentatively scout ahead, and try a few things out with our most experimental work.
  5. We can scan the horizon, and watch what pioneers are doing: what works well there? As well as the sources I mention in my opening paragraph, I like to read the Scholarly Kitchen for horizon scanning.

Ultimately, we need to be alert, to draw on all our knowledge and experience, and to be open and aware of our publishing needs. The best way to do this is to be a reader and consumer of published outputs in your discipline, and a member of the academic community. That way, you will know what your goal looks like, and you’ll recognise it when you see it, out there in the shifting sands of academia.