How do researchers share articles? Some useful links

This is a topic that interests me: how do researchers choose what to read? Where are the readers on our platforms coming from, when we can’t track a source URL? What are researchers doing in collaboration spaces? (Research processes are changing fast in the Internet era.) Is the journal article sharing that takes place legal and/or ethical? I’m a big fan of Carol Tenopir’s work investigating readers’ behaviours, and I think there’s much to learn in this area. Sharing an article does not equate to it having been read, but it is a very interesting part of the puzzle of understanding scholarly communication.

Usage is something that altmetrics display (the altmetric.com donut has a section for “Readers”, which incorporates information from Mendeley), and it’s just possible that usage could become a score to rival the impact factor when evaluating journals. It often seems to me that we’re on a quest for a mythical holy grail when we evaluate journals and criticise the impact factor!

Anyway, what can we know about article sharing? In my last blogpost I highlighted BrightTALK as a way to keep up to date with library themes. The LibraryConnect channel features many useful webinars & presentations (yes, I spoke at one of them), and I recently listened to a webinar on the theme of this blogpost’s title, which went live in December 2015. My notes & related links:

Suzie Allard of the University of Tennessee (a colleague of Carol Tenopir) spoke about the “Beyond Downloads” project and their survey’s main takeaways, including that nearly 74% of authors preferred email as a method of sharing articles. Authors may share articles to aid scientific discovery in general, to promote their own work, or indeed for other reasons, nicely illustrated in an infographic on this theme!

Lorraine Estelle of Project COUNTER spoke about the need for comprehensive and reliable data, and described just how difficult such data is to gather. (I can see that tracking everyone’s emails won’t go down well!) There are obviously disciplinary and demographic differences in the way that articles are shared, and therefore read, and she listed nine ways of sharing articles:

  1. email
  2. internal networks
  3. the cloud
  4. reference managers
  5. learning management systems
  6. research social networks
  7. general social networks
  8. blogs
  9. other

Lorraine also introduced some work that COUNTER are doing jointly with CrossRef: DOI tracking and Distributed Usage Logging, both of which are definitely worth further reading and investigation!

Wouter Haak from Elsevier spoke about what you, as an author, can see about readers of your articles on Mendeley’s dashboard. He also spoke about a prototype they are developing for libraries, on which institutions could see the countries where their own researchers’ collaborations are taking place. More intriguingly (to me), he talked about a working group that he was part of, whereby major scientific publishers are apparently agreeing to support the sharing of articles amongst researchers within collaboration groups, on platforms like Mendeley, Academia.edu and ResearchGate, which he described as “Scholarly Collaboration Networks”. Through such a collaboration, the sharing activity across these platforms could all be tracked and reported on. Perhaps it is easier to lure researchers away from email than to track emails!

 

[Photo credit: Got Credit]

How to speed up publication of your research – and impress journal editors

In my last blogpost I looked at the time it takes to get published, and this led to a brief Twitter chat about how editors’ time gets wasted. Of course there are things that researchers can do to help speed up the whole system, just as there are things that publishers are trying to do. If you’re interested in how to write a great journal article in the first place (which, of course, is what will increase your chances of acceptance and therefore speed things up), then you could take a look at some great advice in the Guardian.

I’m not looking at writing in this blogpost, but rather at the steps to publication that researchers can influence, sometimes for themselves and sometimes more altruistically. I imagine that a board game could be based on the academic publication process, whereby you draw cards telling you that you must wait longer, or that you’ve been rejected and sent back to the start. Very occasionally you are told that a peer has sped things up for you in some way, so that you (and your field) can move on.

Do what you’re told!
It sounds simple, but it’s amazing how many editors report that authors appear not to have read the guidelines before submitting. Wrong word counts, incorrect line spacing, no data supplied, wrong reference formats, etc. could all result in a desk rejection, thus wasting everyone’s time. A good reference management tool will ease and expedite reference style reformatting, but even so, matching each journal’s style is a lot of work if you submit the same article to many journals, so perhaps this begins with choosing the right journal (see below).

Also, authors who are re-submitting need to ensure that they respond to ALL the editor’s and reviewers’ recommendations. Otherwise, there might be another round of revisions… or a rejection, setting you back to square one.

Be brief and ‘to the point’ in your correspondence with journal editors
First question to authors: do you really need to write to the editor? Writing to check whether their journal is a good match for your article is apparently annoying to journal editors, especially if your email looks like an automated one. If you have a question, be sure that you can’t find the answer on the journal’s website: this way you can save editors’ time so that they can use it to make the right publishing decisions. If you want to make a good impression on an editor or seek their opinion, then perhaps find a way to meet them personally at a conference. (Tip: if they are on Twitter then they might announce which conferences they are going to!)

Choose the right journal to submit to

I have no magic formula but these steps might help you to decide:

  1. Look for a good subject match. Then consider whether the type, scale and significance of your work fit the type of material usually published in that journal. In other words, read some of the content recently published in the journal you intend to submit to. Check their calls for papers and see if you match them. And read their guidelines (see above).
  2. Listen to experienced authors. If you know someone with experience of publishing in a particular journal, then perhaps ask them for advice: getting to know the journal you are submitting to is important in helping you to target the right one.
  3. Use bibliometric scores with caution. I have blogged here previously about 12 signs of quality for a journal, and note that I don’t mention the impact factor! My number 1 is about peer review, and I expand on that in this post, below. My number 5 is whether the journal is indexed on Web of Science or Scopus: this is not all about the impact factor either. What it means is that the journal you are considering has passed selection criteria in order to be indexed at all, that your article will be highly discoverable, and that it would contribute to your own h-index as an author. If you really want to use a bibliometric, you could look at article influence scores; and since this blogpost is about speeding things up, you could also consider the immediacy index, which indicates how quickly items are cited after publication (see the sketch after this list).
  4. Can’t I just take a sneak peek at the impact factors? I think this is a last resort! Some people see them as a proxy for a good reputation, but after all I’ve read about bibliometrics, I’d rather use my twelve signs. In my last blogpost I reported on a Nature News item which implied that middle-range impact factor journals are likely to have a faster turnaround time, but you’ll have to dig a bit deeper to see if there’s anything in that idea for your discipline. In my view, if everyone is targeting the top impact factor journals, you can be sure that these journals will have delays and high rejection rates. You might also miss the chance to contribute to a “rising star” journal.
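
For reference, here is the standard definition of the immediacy index mentioned in point 3 (my own gloss, not something from the original post):

```latex
% Immediacy index for a journal in year Y:
\mathrm{II}_Y =
  \frac{\text{citations received in year } Y \text{ to articles published in } Y}
       {\text{number of articles published in year } Y}
% A higher value suggests the journal's articles are picked up quickly
% after publication.
```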

Choose a perfect peer reviewer!
At some journals, you may get the option to suggest peer reviewers. I don’t imagine that there are many experts in your field who are so good at time management that they can magically create time, and who already know about and value your work, so you will have to balance your needs with what is on offer. Once again, you should be careful to follow the journal’s directions in suggesting peer reviewers. For example, it’s no good suggesting an expert practitioner as a peer reviewer if the journal explicitly asks for academics, and you probably can’t suggest your colleague either: read what the journal considers to be appropriate.

Is it the right peer review mechanism?
There are many variations of peer review, and some innovative practice might appeal to you if your main goal is speed of publication, so you could choose a journal that uses one of these modern methods.

Here is a list of some peer review innovations with acceleration in mind:

  1. You may have an option to pay for fast tracked peer review at your journal of choice.
  2. Seek an independent peer review yourself, before submission. The same type of company that journals might turn to if they offer a paid-for fast track peer review may also offer you a report that you can pay for directly. The example I know of is Rubriq.
    You can also ask colleagues or peers for a pre-submission peer review, if you think that they might be willing.
  3. Take advantage of a “cascading peer review” gold open access (OA) route, at a publisher which offers that. It’s a shame that OA often appears to be a lower quality option, because publishers say to authors the equivalent of “you’re rejected from this top journal but are invited to submit to our gold OA journal”. Such an invitation doesn’t reflect well on the publishers either, because of course gold OA is the route where authors pay a fee or “Article Processing Charge”. However, if your research budget can cover the cost then this can be quicker.
  4. Open reviews: there is a possibility that reviewers will be more thorough if their reviews are publicly seen, so I’m not sure that this will necessarily speed the process up. But if you’re looking for explicit reasons why you’ve been rejected, then such a system could be helpful. PeerJ is a well known example of a journal that does this.
  5. Publish first and opt for post-publication peer review. The example often given is F1000, which is really a publishing platform rather than a journal. Here, the research is published first and labelled as “awaiting peer review”. It is indexed after peer review by places like PubMed, Scopus, the British Library, etc. F1000 also has open peer review, so the reviews as well as the latest version can be seen, and authors can make revisions after peer review at any time. An alternative to F1000 is to put your draft paper into an open access repository, where it will at least be visible and available, and seek peer review through publication in a journal later. However, there are disciplinary differences as to whether this will be acceptable practice when you later submit to journals (is it a redundant publication because it’s in a repository?), and indeed whether your pre-print will be effective in claiming your “intellectual territory”. In some disciplines, the fear is that repository papers are not widely seen, so others might scoop you and reach recognised publication first. In the sciences this is less likely, since the access to equipment and lengthy experiments involved are not easily duplicated in time.

Be a peer reviewer, and be prompt with your responses
I have three steps you can follow, to accelerate even traditional peer review:

  1. When invited to carry out a peer review that you cannot find time for, or for which you are not the right person, quickly say “no”, and perhaps suggest someone else suitable. This will speed things up for your peers and make a good impression on an editor: one day this might be important.
  2. If you say “yes” then you can be prompt and clear: this will support your peers but may also enhance your reputation. Larger publishers may track peer reviewers’ work on a shared (internal only or publicly visible!) system, and you can claim credit yourself somewhere like Publons. (See an earlier blogpost that discusses credit for peer review.)
  3. Are you setting the bar too high? Raising standards ever higher lengthens the time it takes for research to be shared. Of course, this is also about meeting the quality standards of the journal and thereby setting and maintaining the standards of your discipline. Not an easy balancing act!

Finally, remember that publication is only the beginning of the process: you also have to help your colleagues, peers and practitioners to find out about your article and your work. Some editors and publishers have advice on how to do that too, so I’m sure that it will impress them if you do this!

Keeping up to date with bibliometrics: the latest functions on Journal Citation Reports (InCites)

I recently registered for a free, live, online training session on the latest functions of Journal Citation Reports (JCR) on InCites, from Thomson Reuters (TR). I got called away during the session, but the great thing is that they e-mail you a copy so you can catch up later. You can’t ask questions, but at least you don’t miss out entirely! If you want to take part in a session yourself, then take a look at the Web of Science training page. Or just read on to find out what I picked up and reflected on.

At the very end of the session, we learnt that 39 journal titles have been suppressed in the latest edition. I mention it first because I think it is fascinating to see how journals go in and out of the JCR collection, since having a JCR impact factor at all is sometimes seen as a sign of quality. These suppressed titles are suspended and their editors are informed why: it is apparently because of either a high self-cite rate, or something called “stacking”, whereby two journals are found to be citing each other in such a way that they significantly influence the latest impact factor calculations. Journals can come out of suspension, and indeed new journals are also added to JCR from year to year. Here are the details of the JCR selection process.
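
As a rough sketch of what a “high self-cite rate” means (my gloss, assuming the conventional definition rather than anything stated in the session):

```latex
% Conventional self-cited rate for journal J in a given year:
\text{self-cite rate}(J) =
  \frac{\text{citations to } J \text{ coming from articles in } J}
       {\text{total citations received by } J}
% Suppression is triggered when this (or "stacking" between two journals)
% distorts the impact factor calculation.
```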

The training session began with a look at Web of Science: they’ve made it easier to see JCR data when you’re looking at the results of a Web of Science search, by clicking on the journal title: it’s good to see this link between TR products.

Within JCR, I like the visualisation that you get when you choose a subject category to explore: this tells you how many journals are in that category and you can tell the high impact factor journals because they have larger circles on the visualisation. What I particularly like though, is the lines joining the journals: the thicker the line, the stronger the citing relationship between the journals joined by that line.

It is the librarian in me that likes to see that visualisation: you can see how you might get demand for journals that cite each other, and thus get clues about how to manage your collection. The journal profile data that you can explore in detail for an individual journal (or compare journal titles) must also be interesting to anyone managing a journal, or indeed to authors considering submitting to a journal. You can look at a journal’s performance over time and ask yourself “is it on the way up?” You can get similar graphs on SJR, of course, based on Elsevier’s Scopus data and available for free, but there are not quite so many different scores on SJR as on JCR.

On JCR, for each journal there are new “indicators”, or measures/scores/metrics that you can explore. I counted 13 different types of scores. You can also explore more of the data behind the indicators presented than you used to be able to on JCR.

One of the new indicators is the “JIF percentile”. This has apparently been introduced because the quartile information is not granular or meaningful enough: there could be lots of journals in the same quartile for a subject category. I liked the normalised Eigenfactor score in the sense that the number has meaning at first glance: higher than 1 means higher than average, which is more meaningful than a standard impact factor (IF). (The Eigenfactor is based on JCR data but not calculated by TR. You can find out more about it at Eigenfactor.org, where you can also explore slightly older data and different scores, for free.)
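
For contrast, here is the standard two-year impact factor definition alongside the way I read the normalised Eigenfactor (again, my own summary rather than material from the session):

```latex
% Standard two-year Journal Impact Factor for year Y:
\mathrm{JIF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}}
% where C_Y(y) = citations received in year Y to items published in year y,
% and   N_y    = citable items published in year y.

% The normalised Eigenfactor rescales the raw Eigenfactor so that the
% average JCR journal scores 1, which is why "higher than 1 means higher
% than average" can be read off at a glance.
```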

If you want to explore more about JCR without signing up for a training session, then you could explore their short video tutorials and you can read more about the updates in the JCR Help file.

12 Questions to ask, for basic clues on the quality of a journal

When choosing where to publish a journal article, what signs do you look out for? Here are some questions to ask or aspects to investigate, for clues.

  1. Is it peer reviewed? (Y/N and every nuance in between.) See the journal’s website.
  2. Who is involved in it? The editor & publisher? Are they well known & well thought of? Who has published articles there already: are these big players in your field? Read the journal!
  3. Is it abstracted/indexed by one of the big sources in your field? (The journal’s website should tell you this. Big publishers also offer their own databases of house journals.)
  4. What happens when you search on Google for an article from the journal? Do you get the article in the top few results? And on Google Scholar?
  5. Does it appear in Web of Science or Scopus journal rankings?
  6. Take a look on COPAC: which big research libraries subscribe?
  7. Have a look at the UK’s published RAE2008 / forthcoming REF2014 data and see if articles from that journal were part of the evidence submitted, and rated as 4*.
  8. Do the journal articles have DOIs? This is a really useful feature for promotion of your article, and it will mean that altmetric tools can provide you with evidence of engagement with your article. (See the sketch after this list.)
  9. Is there an open access option? (See SherpaRomeo.) This is a requirement of many research funders, but it is also useful for you when you want to promote your article.
  10. Is it on the list of predatory OA journals? You might want to avoid those, although check for yourself. Note that some journals on the list are disputed/defended against the accusation of predation!
  11. Is it listed on the ISSN centre’s ROAD (http://road.issn.org/)? What does this tell you about it?
  12. If you have access through a library subscription, is it listed on Ulrich’s periodicals directory? What does this tell you about it? Note the “peer review” symbol of a striped referee’s shirt: if the shirt is not there, it doesn’t necessarily mean that the journal is not peer reviewed: you may have to investigate further.
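
If you want a quick way to check question 8 for yourself, here is a minimal sketch using CrossRef’s public REST API (via the Python requests library). The ISSN is a hypothetical placeholder, not a real journal; substitute the one you are vetting:

```python
# Minimal sketch: check whether a journal registers DOIs with CrossRef.
# The ISSN below is a hypothetical placeholder, for illustration only.
import requests

ISSN = "1234-5678"  # substitute the journal you are investigating

resp = requests.get(
    f"https://api.crossref.org/journals/{ISSN}/works",
    params={"rows": 3},  # a small sample of records is enough
)
if resp.ok:
    for item in resp.json()["message"]["items"]:
        # Every record registered with CrossRef carries a resolvable DOI.
        title = item.get("title") or ["(untitled)"]
        print(item["DOI"], "-", title[0])
else:
    print("No CrossRef journal record found; DOIs may not be registered.")
```
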
FURTHER NUANCES…
– What type of peer review is used? Is it rigorous? Is it useful to you, even if you get rejected?
– Time to rejection/acceptance: how soon do you need to be published?
– Acceptance/rejection rate
– Journal Impact Factor/ SJR score(s) /quartile for the field

Open Access (OA) and all that jazz!

Next week I’m due to visit Humboldt University’s IBI in order to participate in a students’ seminar about Open Access. I’m very much looking forward to it and thought I’d do a bit of reading to keep me up to speed on the OA themes that are trending at the moment.

A recently published article has come to my attention through a LinkedIn group I belong to: Opening Doors, by Rob Virkar-Yates.

It describes some of the technical issues that need to be solved, in order to support OA, both at the “upstream” end, where articles are processed for publication, and “downstream” where articles are discovered and read by researchers. Here are a few of the issues raised in the article, along with my comments and thoughts!

1) Direct author-publisher transactions are not part of existing submission processes.
I noticed this when working at the University of Warwick Library: we had to chase both authors and publishers to get Gold OA Article Processing Charges (APCs) paid in time to spend the money allocated to Warwick by HEFCE, and the authors and departmental administrators found the processes rather frustrating and onerous, to the point where at least one author decided not to bother with Gold OA.

The article states that “the majority of academic institutions remain unclear as to how to integrate APCs into their workflows” and I’m sure that many institutions are still working it out: classic issues would be whether to handle OA financial transactions centrally or in departments, whether to use an intermediary service (see the recent RIN report on that topic), and how to ensure a fair and effective distribution of the money.

2) “Open Access is driving some exceptionally contentious changes to the peer review process.”
Virkar-Yates gives eLife and F1000 as examples of OA journals that are innovating in peer review by bringing more transparency to the way an article has been reviewed. I’m interested in the possibility that peer review might evolve as access to content is opened up, but if peer review in its traditional guise is working for academia, then it can work for OA journals just as easily as for subscription ones. That seems to be the conclusion of the Open Library of Humanities project (OLH) in a recently published UKSG e-news article. I’m wary of worrying academics that the peer review system is under threat, because a switch towards OA is contentious enough in itself, never mind causing worry that existing, established methods for ensuring quality are about to be abandoned by publishers!

OA publication models do tend to favour bulk publishing, and in a scenario where there are more articles and more journals out there, researchers will need ways to differentiate amongst all the articles to find the highest quality: they need to do this fairly quickly and efficiently, as their time is limited. I think that the existing signs of quality, such as journal impact factor, prestige of the editor and authors, peer review practices, established position in the discipline, etc. are likely to remain important for the time being at least. Even PLoS publishes journals that are tailored to disciplines, have lower acceptance rates and achieve higher impact factors than its bulk, cross-disciplinary journal, PLoS One; and OLH seems to be proposing overlay journals on top of its bulk of content.

I can see why Virkar-Yates included this aspect, though: publishers of OA journals may find that there are opportunities to develop other aspects of their journals alongside the move to OA, and if you know that quality filters are important in an OA world, then you might want to find ways to add those in: instead of or as well as the traditional peer review.

3) Different formats for content.
The article says “It is now not uncommon for articles to be published with their associated data sets (or links to the data held in OA data repositories), supporting video, animation and other textual resources.” The electronic age has long since allowed publishers to experiment with the format of the journal, or the journal article. Indeed this has been happening with some titles I’ve bookmarked on Diigo, and there have long been disciplinary differences in journal article length, referencing styles, etc.: the electronic journal has the capacity to be very different from the traditional print one, but the issue, as Virkar-Yates points out, is how to support the different types of output, file formats, etc. on the same platform.

I wonder if the answer is to offer more specialised types of publication for different disciplines. I’m a big fan of the e-Crystals repository, and I’ve often wondered what we might do with data repositories, because they seem to me to be the most discipline-specific types of output, and most useful when they have metadata schemas designed around a specialist type of data and data need. I believe that, in a world of vast amounts of free content, it will be the way that researchers are enabled to handle that content that makes a product worth paying for, and I think this could require an element of specialisation. It’s an interesting space to watch: in Virkar-Yates’ own explanation of Green OA he points out that “Forty-one percent of all repository usage is through the University of Cambridge’s DSpace@Cambridge platform”, and I know that it’s a repository that has long had a policy of taking all kinds of content, across all kinds of disciplines: is this a model for publishers to follow, or should they concentrate on offering something different from repositories?

4) Lack of authentication when access is open
“A signed-in user is a known user, so publishers need to get more consumer-savvy and work out ways to incentivise registration under OA.” Good point, but I think that a lot of publishers have this covered with their alerting services, saved lists of references and saved search history options, which researchers need to sign in for. Joining this sign-in process together with other social media authentication would probably be better for researchers than signing in through institutional logins; in any case, on many platforms the publishers don’t know much about researchers after authentication other than which institution they belong to. But perhaps that is precisely what they need to know, so that they can tell Libraries what an invaluable product they are subscribing to!

5) Optimisation for Google by removal of paywalls
Well, this makes sense to me, even though I am a Librarian. I don’t think we’ve been burying our heads in the sand, as the author claims: we’ve simply been trying to point out to researchers that Google doesn’t access all the content that they need, and that there are more powerful ways of searching scholarly content than the simple keyword searching that Google offers. That doesn’t mean that we would be against Google indexing that scholarly content, if it did it well. In fact, Librarians have also been trying to teach researchers how to get the most out of Google and Google Scholar.

6) Multiple & portable devices
“…all content platforms, and particularly Open Access platforms, need to face up to the very real and pressing technical challenge of how to seamlessly deliver content across multiple untethered devices.” Says it all, for me!

7) Hybrid journals where some content is OA, some is behind a paywall
I’ve never been a fan of hybrid journals as an OA solution, because there isn’t a way for our researchers to know that an article is available to them as an OA one when their institution doesn’t subscribe to that particular journal. One of the things I used to tell researchers to do when they wanted an article was to search Google for an OA version. It’s one of the things that I used to check document supply request forms for, and frequently found, even some years ago. Hybrid is better than no OA at all, but as Virkar-Yates points out, there is a real issue around the metadata at article level, to make sure that open access content is in fact accessible!

Virkar-Yates’ article prompts much thought and touches on some very important issues, but there are more that I’d like to consider:

a) Monographs
This topic is suggested in Virkar-Yates’ article, when he discusses output format variety, but monographs seem to me to be a specific issue. OLH are investigating this topic over the next few years, Open Book Publishers have just won an award and the Wellcome Trust have just announced plans to extend their OA policy to include monographs and book chapters, according to this Times Higher Education article, although I note that this extension does not include the CC-BY (Creative Commons Attribution) requirement that exists for journal articles.

b) Copyright
One of the hurdles for OA is to differentiate between access by a reader and access that allows further copying: five years ago, when I was establishing Warwick’s repository, WRAP, it seemed clear to me that the priority was to allow readers to have access. Every item in WRAP had a cover sheet explaining that the copyright remained with the publisher or author, and that copying of the repository item was not granted by the repository. Allowing Creative Commons licences to be attached to items was a development that I would have liked to add (and I know that Loughborough University’s repository has always asked for one), but I knew that there were already a lot of hurdles to deposit, and that frankly, a requirement to add a licence that the author had never seen before and quite often did not understand would be one hurdle too many.

I expected that WRAP could overcome it in time and indeed I can see amongst the latest additions to WRAP that some do have cover sheets explaining that a CC licence applies. The RCUK OA policy expects the copyright issue to be addressed, as they have followed the Wellcome Trust in making requirements for not only OA, but also CC licences. A large national body like the RCUK has a way of reaching and influencing researchers that a new repository manager does not have!

c) Platinum OA
This was described in an Information Research article from 2007, and it’s essentially where researchers publish OA journals for themselves. It doesn’t quite fit the remit of Virkar-Yates’ article, in the sense that most researchers won’t be able to do this and be at the cutting edge of technology in publishing practice! But with the rise of OA, there has been a rise in the number of OA journal titles (as evidenced by the titles listed by the DOAJ, which Virkar-Yates refers to), many of which originate from the research community.

My final thought is that I should read the recent JISC/RLUK survey report, on the attitudes and behaviours of researchers, which apparently reveals their reliance on open access… but that’s too much for one sitting!