Not enough time for reading in academia: can we measure it?

I wanted to explore a topic which has been popular on Twitter, at least amongst the tweets I saw over the summer: that of academics struggling to find the time to read. I’ve written this blogpost in something of a “summer exploration” spirit, since I connected this topic with my interest in bibliometrics.

During the summer there were many mentions on Twitter of the importance of reading in academia. Reading of any kind is important for training our minds to think. It’s important for training our own ability with words, our writing skills. And it’s important for keeping up to date with academic discoveries and developments in fields of interest, to name but a few advantages of reading. Pat Thomson is eloquent on the matter.

As a librarian by background, of course I’m a big fan of reading! But I see how pressure on scholars and researchers to publish, to bring in research grants and to contribute to other activities that are measured in performance evaluations and university rankings might actually be causing them to read less. I may be doing researchers a disservice by suggesting that they are reading less, but I mean it sympathetically. Carol Tenopir’s 2014 research into reading via questionnaires and academics’ self-reporting is outlined on the Scholarly Kitchen blog: at first it did look like there was a decline in reading, but in the end the research might only indicate that a plateau was reached, at a time when the volume of content being published is increasing. This might make some scholars feel that they are unable to keep up with their field.

My provocative thought goes like this: If focussing on publication outputs and measuring them via bibliometrics has led to a lack of reading time (which I’m a long way off proving), then perhaps the solution is to also measure (and give credit for) time invested in reading!

Disciplinary differences are at the core of academic reading habits, evidenced by studies of library impact on students, among others. Such studies have involved attempts to correlate student grades with library accesses, as explored in this 2015 paper:

Here there is some correlation between “quality” academic performance and library accesses, although the main conclusion seems to be the importance of the library when it comes to student retention. I also remember reading Graham Stone’s earlier work (cited in the paper above), and noting the importance of data protection issues. These studies look at cohorts of students rather than individuals and their grades, due to ethical (and legal) concerns which would apply to researchers, too.
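
As a rough illustration of what that kind of correlation exercise involves, here is a minimal sketch with invented cohort-level numbers and column names (this is not the method of the paper above, just the general shape of the analysis):

    # Hypothetical sketch: correlating cohort-level library usage with average grades.
    # The data and column names are invented; real studies use anonymised cohorts
    # and far more careful controls.
    import pandas as pd
    from scipy.stats import spearmanr

    cohorts = pd.DataFrame({
        "cohort": ["A", "B", "C", "D", "E"],
        "mean_library_accesses": [12.0, 35.5, 48.2, 20.1, 60.3],  # e-resource logins per student
        "mean_grade": [58.0, 63.5, 66.0, 60.2, 68.9],             # average module mark (%)
    })

    # Spearman's rank correlation is a common choice because usage data tends to be skewed
    rho, p_value = spearmanr(cohorts["mean_library_accesses"], cohorts["mean_grade"])
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")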

We must also remember that much content is not digital, or not in the library, whether physical or online. Increasingly, scholarly content is available online via open access, so we don’t need to be identifiably logged in to read it. And indeed, Tenopir’s later work reminds us that content once downloaded can be re-read or shared, outside of the publisher or library platforms. Automatically measuring reading to any degree of accuracy becomes possible only if you dictate how and where academic reading is to be done. Ethical concerns abound!

Instead of measuring time spent reading or volumes of content downloaded or accessed by researchers, perhaps we could give credit to researchers who cite more. After all, citations are an indication that the authors have read a paper, aren’t they? OK, I am being provocative again: how do we know which co-authors have read which of the cited papers? How do we know that a cited paper is one that has been read in full: what if the pre-print has been read rather than the version of record, or only the abstract? Such doubts about what it means to read a paper are expressed in the comments of the Scholarly Kitchen post mentioned earlier.

Actually, we could say that reading and citations are already indirectly assessed, because we evaluate written outputs and publications, and their quality reflects the amount and quality of reading behind them. I think that’ll have to do, because the more I read about academic reading, the more I think we can’t know! How we evaluate the outputs is another matter, of course. I’ve blogged about peer review, but not article level metrics – yet.

I tried to track down Tenopir’s published paper, based on the self-reported questionnaire research critiqued on the Scholarly Kitchen. I think it must be the paper entitled “Scholarly article seeking, reading, and use: a continuing evolution from print to electronic in the sciences and social sciences”. The critiquing all occurred before the paper was published, so direct links weren’t provided. Research into how much researchers are reading, whether based on downloads or questionnaires, can illustrate disciplinary differences, or signal changes in research practice over time. Tenopir and her co-authors shed light on this, and opened up more questions to be answered. I wonder whether researchers could be persuaded to allow tracking software to spy on their reading habits for a limited period… there is much more to be explored in this area, but I’m sure that we won’t gain trust by suggesting reading metrics!

Image credit: CC0 Pixabay.

 


How do researchers share articles? Some useful links

This is a topic that interests me: how do researchers choose what to read? Where are the readers on our platforms coming from, when we can’t track a source URL? What are researchers doing in collaboration spaces? (Research processes are changing fast in the Internet era.) Is the journal article sharing that is taking place legal and/or ethical? I’m a big fan of Carol Tenopir’s work investigating readers’ behaviours and I think there’s much to learn in this area. Sharing an article does not equate to it having been read, but it is a very interesting part of the puzzle of understanding scholarly communication.


Usage is something that altmetrics display (the altmetric.com donut has a section for “Readers” which incorporates information from Mendeley), and it’s just possible that usage could become a score to rival the impact factor when evaluating journals. It often seems to me like we’re on a quest for a mythical holy grail when evaluating journals and criticising the impact factor!

Anyway, what can we know about article sharing? In my last blogpost I highlighted BrightTALK as a way to keep up to date with library themes. The LibraryConnect channel features many useful webinars & presentations (yes, I spoke at one of them), and I recently listened to a webinar on the theme of this blogpost’s title, which went live in December 2015. My notes & related links:

Suzie Allard of the University of Tennessee (colleague of Carol Tenopir) spoke about the “Beyond Downloads” project and their survey’s main takeaways. These include the finding that nearly 74% of authors preferred email as a method of sharing articles. Authors may share articles to aid scientific discovery in general, to promote their own work, or indeed for other reasons, nicely illustrated in an infographic on this theme!

Lorraine Estelle of Project COUNTER spoke about the need for comprehensive and reliable data, and described just how difficult it is to gather such data. (I can see that tracking everyone’s emails won’t go down well!) There are obviously disciplinary and demographic differences in the way that articles are shared, and therefore read, and she listed nine ways of sharing articles:

  1. email
  2. internal networks
  3. the cloud
  4. reference managers
  5. learning management systems
  6. research social networks
  7. general social networks
  8. blogs
  9. other

Lorraine also introduced some work that COUNTER are doing jointly with CrossRef: DOI tracking and Distributed Usage Logging, which are definitely worth further reading and investigation!

Wouter Haak from Elsevier spoke about what you, as an author, can see about readers of your articles on Mendeley’s dashboard. He also spoke about a prototype they are developing for libraries, on which institutions could see in which countries collaborations involving their own researchers are taking place. More intriguingly (to me), he talked about a working group that he was part of, whereby major scientific publishers are apparently agreeing to support sharing of articles amongst researchers within collaboration groups, on platforms like Mendeley, Academia.edu and ResearchGate, which he describes as “Scholarly Collaboration Networks”. Through such a collaboration, the sharing activity across these platforms could all be tracked and reported on. Perhaps it is easier to lure researchers away from email than to track emails!

 

[Photo credit: Got Credit]

How to speed up publication of your research – and impress journal editors

In my last blogpost I looked at the time it takes to get published, and this led to a brief Twitter chat about how editors’ time gets wasted. Of course there are things that researchers can do to help speed up the whole system, just as there are things that publishers are trying to do. If you’re interested in how to write a great journal article in the first place (which, of course, is what will increase your chances of acceptance and therefore speed things up) then you could take a look at some great advice in the Guardian.

I’m not looking at writing in this blogpost, rather at the steps to publication that researchers can influence, sometimes for themselves and sometimes more altruistically. I imagine that a board game could be based on the academic publication process, whereby you get cards telling you that you must wait longer, or that you are rejected and sent back to the start. Very occasionally you are told that a peer has sped things up for you in some way so that you (and your field) can move on.

Do what you’re told!
It sounds simple, but it’s amazing how many editors report that authors appear not to have read the guidelines before submitting. Wrong word counts, line spacing, no data supplied, wrong reference formats, etc. could all result in a desk rejection, thus wasting everyone’s time. A good reference management tool will ease and expedite reference style reformatting, but even so, matching each journal’s style is a lot of work if you submit the same article to many journals, so perhaps this begins with choosing the right journal (see below).

Also, authors who are re-submitting need to ensure that they respond to ALL the editor’s and reviewers’ recommendations. Otherwise, there might be another round of revisions… or a rejection, setting you back to square one.

Be brief and ‘to the point’ in your correspondence with journal editors
First question to authors: do you really need to write to the editor? Writing to check if their journal is a good match for your article is apparently annoying to journal editors, especially if your email looks like an automated one. If you have a question, be sure that you can’t find the answer on the journal’s website: this way you can save editors’ time so that they can use it to make the right publishing decisions. If you want to make a good impression on an editor or seek their opinion, then perhaps find a way to meet them personally at a conference. (Tip: if they are on Twitter then they might announce which conferences they are going to!)

Choose the right journal to submit to

I have no magic formula but these steps might help you to decide:

  1. Look for a good subject match. Then consider whether the type, scale and significance of your work fit the type of material usually published in that journal. In other words, read some of the content recently published in the journal you intend to submit to. Check their calls for papers and see if you match them. And read their guidelines (see above).
  2. Listen to experienced authors. If you know someone with experience of publishing in a particular journal, then perhaps ask them for advice: getting to know the journal you are submitting to is important in helping you to target the right one.
  3. Use bibliometric scores with caution. I have blogged here previously about 12 signs of quality for a journal, and note that I don’t mention the impact factor! My number 1 is about peer review, and I expand on that in this post, below. My number 5 is whether the journal is indexed on Web of Science or Scopus: this is not all about the impact factor either. What it means is that the journal you are considering has passed selection criteria in order to be indexed at all, that your article will be highly discoverable, and that it would contribute to your own h-index as an author. If you really want to use a bibliometric, you could look at the article influence scores, and since this blogpost is about speeding things up, you could also consider the immediacy index, which indicates how quickly items are cited after publication (see the definitions just after this list).
  4. Can’t I just take a sneaky peek at the impact factors? I think this is a last resort! Some people see them as a proxy for a good reputation, but after all I’ve read about bibliometrics, I’d rather use my twelve signs. In my last blogpost I reported on a Nature News item, which implied that middle-range impact factor journals are likely to have a faster turnaround time, but you’ll have to dig a bit deeper to see if there’s anything in that idea for your discipline. In my view, if everyone is targeting the top impact factor journals, you can be sure that these journals will have delays and high rejection rates. You might miss the chance to contribute to a “rising star” journal.
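
(An aside for anyone who wants to see what these journal-level scores actually measure: the standard definitions run roughly as follows, though the exact rules about what counts as a “citable item” vary between databases.)

    \[
    \text{Impact Factor}_Y = \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
    \]

    \[
    \text{Immediacy Index}_Y = \frac{\text{citations received in year } Y \text{ by items published in year } Y}{\text{number of citable items published in year } Y}
    \]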

Choose a perfect peer reviewer!
At some journals, you may get an option to suggest peer reviewers. I don’t imagine that there are many experts in your field who are so good at time management that they can magically create time, and who already know about and value your work, so you will have to balance your needs with what is on offer. Once again, you should be careful to follow the journal’s directions in suggesting peer reviewers. For example, it’s no good suggesting an expert practitioner as a peer reviewer if the journal explicitly asks for academics, and you probably can’t suggest your colleague either: read what the journal considers to be appropriate.

Is it the right peer review mechanism?
There are many variations of peer review, and some innovative practice might appeal to you if your main goal is speed of publication, so you could choose a journal that uses one of these modern methods.

Here is a list of some peer review innovations with acceleration in mind:

  1. You may have an option to pay for fast-tracked peer review at your journal of choice.
  2. Seek an independent peer review yourself, before submission. The same type of company that journals might turn to when they offer paid-for fast-track peer review may also offer you a report that you can pay for directly. The example I know of is Rubriq.
    You can also ask colleagues or peers for an informal pre-submission review, if you think that they might be willing.
  3. Take advantage of a “cascading peer review” gold open access (OA) route, at a publisher which offers that. It’s a shame that OA often appears to be a lower quality option, because publishers say to authors the equivalent of “you’re rejected from this top journal but are invited to submit to our gold OA journal”. Such an invitation doesn’t reflect well on the publishers either, because of course gold OA is the one where authors pay a fee or “Article Processing Charge”. However, if your research budget can cover the cost then this can be quicker.
  4. Open reviews: there is a possibility that reviewers will be more thorough if their reviews are publicly seen, so I’m not sure that this will necessarily speed the process up. But if you’re looking for explicit reasons why you’ve been rejected, then such a system could be helpful. PeerJ is a well known example of a journal that does this.
  5. Publish first and opt for post-publication peer review. The example often given is F1000, which is really a publishing platform rather than a journal. Here, the research is published first, and labelled as “awaiting peer review”. It is indexed after peer review by places like PubMed, Scopus, the British Library, etc. F1000 also has open peer review, so the reviews as well as the latest version can be seen. Authors can make revisions after peer review and at any time. An alternative to F1000 is that you can put your draft paper into an open access repository where it will at least be visible/available, and seek peer review through publication in a journal later. However, there are disciplinary differences as to whether this will be acceptable practice or not when you later submit to journals (is it a redundant publication because it’s in a repository?), and indeed whether your pre-print will be effective in claiming your “intellectual territory”. In some disciplines, the fear is that repository papers are not widely seen, so others might scoop you to reach recognised publication. In the sciences this is less likely, since access to equipment and lengthy experiments are not likely to be duplicated in time.

Be a peer reviewer, and be prompt with your responses
I have three steps you can follow, to accelerate even traditional peer review:

  1. When invited to carry out a peer review that you cannot find time for, or for which you are not the right person, you can quickly say “no”, and perhaps suggest someone else suitable. This will speed things up for your peers and make a good impression on an editor: one day this might be important.
  2. If you say “yes” then you can be prompt and clear: this will support your peers but may also enhance your reputation. Larger publishers may track peer reviewers’ work on a shared (internal only or publicly visible!) system, and you can claim credit yourself somewhere like Publons. (See an earlier blogpost that discusses credit for peer review.)
  3. Are you setting the bar too high? If reviewers raise standards ever higher, the time it takes for research to be shared is lengthened. Of course, this is also about meeting the quality standards of the journal and thereby setting and maintaining the standards of your discipline. Not an easy balancing act!

Finally, remember that publication is only the beginning of the process: you also have to help your colleagues, peers and practitioners to find out about your article and your work. Some editors and publishers have advice on how to do that too, so I’m sure that it will impress them if you do this!

Peer review of journal articles: how good is it really? A librarian evaluates an evaluation system, for scholarly information sources.

Peer review is a signifier of quality in the scholarly world: it’s what librarians (like me) teach students to look out for, when evaluating information sources. In this blog post, I explore some of the uses, criticisms and new developments in the arena of scholarly peer reviewing and filtering for quality. My evaluation of this evaluation system is fairly informal, but I’ve provided lots of useful links.

What is peer review?

It varies from one process to the next, but ideally, scholarly journal articles are chosen and polished for publication by a number of other scholars or peers, in a process known as peer review (sometimes called refereeing). Sometimes only two reviewers are used per article, sometimes three; plus, of course, the journal editor and editorial board have roles in shaping what sort of content is accepted in the journal.

Sometimes the process is “double-blind”, in that the reviewers don’t know who the author(s) are and the authors don’t know who the reviewers are; sometimes it is only “blind”, in that the author(s) don’t know who the reviewers are. In this way, the reviewers can be critical without fearing that they might suffer negative career consequences.

However, one problem with peer review worth noting here (although not explored below) is that peer reviewers’ criticisms can often be brutal, because they are made under the protection of anonymity. I also think that time pressures mean that peer reviewers don’t phrase their thoughts “nicely”, because it simply takes too long and they don’t have that time to invest.

Double-blind reviewing is not always possible: it can be difficult to disguise authors’ identity since the research described in the paper might be known to peers, for example when only one or two labs have the specialist equipment used.

There’s more information on peer review over on the PhD Life blog, which explains what reviewers might be looking for and the possible outcomes of peer review. It also explains some of the other quality-related processes associated with scholarly journal publishing, such as corrections and retractions.

Peer review happens in other contexts too, such as the UK’s REF, which has been heavily criticised as not being the “gold standard” that it should be, because reviews of outputs were carried out only by British scholars, and because a paper might be read by only one reviewer in this process.

Another frequent use of peer review is when research funding bids are reviewed and grants are awarded: panels are often made up of peers. I’ve done this and it’s a valuable experience that helps you to hit the right note in your own future funding applications, but it is also hard work to read all the bids and try to do them all justice.

It sounds good, so why ask how good it is?

Journal publishing is always growing, and peer review is under pressure. A recent scam involving researchers peer reviewing their own papers, and its discovery, is described by the Ottawa Citizen. Every year I read about papers that have been published in spite of journals’ quality filters. The Retraction Watch website highlights stories of published scholarly articles that journals have retracted, i.e. the research findings described are not reliable.

Here are some of the flaws of the peer review process, in relation to journal articles.

1) It takes a very long time

I sense frustration about long journal turnaround times, and peer review takes up quite a lot of that turnaround time. When you think about how much pressure there is on academics to write and to publish, how they get little recognition and no financial compensation for participating in the peer review process, how important it is to be seen to be the first to publish on something, and how scholarly work can be built upon sooner when it is published more quickly, it is no surprise to me that review times are not so fast.

2) It’s not efficient

If you submit to one journal and are peer reviewed and then rejected, you can then submit to another journal which might also put your article forward for peer review. Some people might call this redundant reviewing (since the work has already been done!) and it does add to the time-lag before research can be published and shared. As a response, there have been attempts to share reviewed papers, such as when your paper is rejected from one journal but it is suggested that you submit to another journal title by the same publisher instead.

3) Peers themselves get no credit or compensation for their work

There is a service called Rubriq that tries to address this criticism, and all of my points above. They offer a service to authors of having their papers independently reviewed, for a fee. They track the reviewers’ work in a way that allows reviewers to demonstrate their contribution to the field through reviewing, and they also pay a fee to the reviewers, although this can be waived by reviewers who are unable to earn in this way, and it is not thought to be the full value of the input supplied by reviewers.

Authors often suggest appropriate reviewers anyway, so if they supply an already reviewed paper to a journal, perhaps the editor might accept the process from this independent company. Rubriq have a network of journals that they work with.

4) Some articles don’t even reach peer review

A recent piece in Nature News summarises findings of research indicating that whilst journals are good at filtering out poor quality articles through peer review, the journals themselves were not so good at identifying the long-term highest cited papers. 12 out of the 15 most cited papers involved in the study were rejected at first, before finally making it to publication. Perhaps this is because, after rejection by peer review, articles were improved and re-submitted, so the system is working, although I think that the peer reviewers in such instances deserve credit for their contribution. However, this is to assume that the higher cited articles are in fact higher quality, which is not necessarily the case. (See below for a brief consideration of citations and bibliometrics.)

Rejection after peer review is one scenario. The other is often called “desk rejection”, where an editor chooses which articles are rejected straight away and which are sent to peer review. Editors might be basing their decisions on criteria like relevance to the journal’s readership, or compliance with the journal’s guidelines, and not always on the quality of the research.

The message that I take from this is that authors whose papers are rejected can take heart, and keep improving their paper, and keep trying to get accepted for publication, but in trying to please editors and peer reviewers, we are potentially reinforcing biases.

5) Negative results are not published and not shared

This is another case of biases being perpetuated. There are concerns about the loss to scientific knowledge of negative findings, when a hypothesis was tested but not found to be proven. Such findings rarely make it into publication, because what journal editors and peer reviewers seek to publish is research which makes a high impact on scientific knowledge. And yet, if negative results are not reported then there is a risk that other researchers will explore in the same way and thus waste resources. Also, if research is replicated but not proven, this is potentially valuable to science because it could be that the already published work needs correcting. But the odds are stacked in favour of the original publication (it was already peer reviewed and accepted, after all), such that the replication might not be published. Science needs to be able to accommodate corrections, as the article I’ve linked to explains, and one response has been the emergence of journals of negative results.

What are the alternatives to traditional peer review?

I don’t suppose that my list is comprehensive, but it highlights things that I’ve come across recently and frequently, in this context.

John Ioannidis has written that most published research findings are false, and one answer could be replication. A measure based on replication could be useful to indicate the quality of research. But who wants to reproduce others’ research when all the glory (citations, research funding, stable employment) is in making new discoveries? And it’s not simple to replicate others’ studies: we’re often talking about years of work and investigation, using expensive and sophisticated machinery, and quite often there will be different variables involved, so for some research it can never be quite an exact replication.

Post-publication peer review is another possible way to mark research out as high quality. I really like what F1000 are doing, and they explain more about the different ways that articles can be peer reviewed after having been published. I’m not sure that I want to rely on anonymous comments fields, although of course they can bring concerns to light and this is only one kind of “peer review”. I use quotation marks because, if the comments are anonymous, how do you know that they are from peers? But if the peer reviewers and their work are attributed, then I find this to be a really interesting way forward, because one of the pressures on peer review is the lack of acknowledgement, and removing anonymity is one way to provide that acknowledgement.

I like the concept of articles being recommended into the F1000Prime collection: this is almost like creating a library, except that it’s not a librarian who is a filter but a scholarly community. In fact, many librarians’ selections come from suggestions by scholars anyway, so this is part way to a digital library. (Although I believe quite firmly that it is not a library, not least because access to the recommendations is restricted to paying members.) Anyway, a recommendation from a trusted source is another way to filter for quality. The issue then becomes, which sources do you trust? I blogged recently about recommendation systems that are used in more commercial settings.

I have to mention metrics! I’ll start with bibliometrics, which usually means measuring or scoring based on citations between journal articles or papers. For many, this is a controversial measure because there are many reasons why a paper might be cited, and not all of those reasons mean that the paper itself is of high quality. And indeed, there are many high quality papers which might not be highly cited, because their time has not yet come or because their contribution is to a field in which article publication and citation are not such common practice. The enormous growth in scholarly publication has meant that citation indices might also be criticised for too narrow a coverage.
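
As a concrete illustration of how one widely used citation-based score works, here is a minimal sketch of the h-index calculation mentioned earlier, using invented citation counts (an author’s h-index is the largest number h such that h of their papers each have at least h citations):

    # Minimal sketch of the h-index, using invented citation counts.
    def h_index(citation_counts):
        counts = sorted(citation_counts, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    print(h_index([25, 8, 5, 3, 3, 1, 0]))  # prints 3: three papers have at least 3 citations each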

In general, in the lead up to REF2014, researchers in the UK were keen not to be measured by bibliometrics, preferring to trust in peer review panels as a better way to evaluate their research. Yet citation indices allow you to order your search results by “most highly cited”. Would they do this if there was no interest in it as a measure of quality? Carol Tenopir has done some really interesting work in this area.

If you think that bibliometrics are controversial, then altmetrics have attracted some of the juiciest criticisms of all, being described as attention metrics. Yes, altmetrics as a “score” can be easily gamed. No, I don’t think that we should take the number of Facebook “likes” (or worse, a score based upon those and/or other such measures which is calculated in a mysterious way) to be an indicator of the quality of someone’s research. But I think that reactions and responses to a published research article, as tracked by altmetric tools, can be enormously useful to the authors themselves. I’ve written about this already. Altmetrics require appropriate human interpretation: pay the scores too much attention and you will miss the real treasures that other people have also missed.

So how good is peer review, really?

It is a gold standard. It is what publishers do when time and resources allow. But it is not perfect and it is under pressure, and I’m really intrigued and impressed by all the innovative ways to ensure and indicate quality that are being explored. Of all the alternatives that I’ve discussed here, I’m most keen on the notion of open peer review, where it is not anonymous but attributed. This might be post-publication or pre-publication, but I’m keen that we should be able to follow peer reviewers’ and editors’ work.

A lot of these changes to scholarly publishing in the digital era seem to me to mean that the librarian’s role as a filter of information is pretty much at an end. But our role as a guide to sources and instructor of information literacy is ever more important. I would still teach budding researchers to consider peer reviewed works to be more likely to be high quality, but I would also say that they should apply their subject knowledge when reading the paper, and they should look out for other signs of quality or lack thereof. Peer review (and how rigorous it is) is one of a number of clues, and in that sense, nothing much has changed for librarians teaching information literacy, but we do have some interesting new clues to tell our students to watch out for.

12 Questions to ask, for basic clues on the quality of a journal

When choosing where to publish a journal article, what signs do you look out for? Here are some questions to ask or aspects to investigate, for clues.

1 – Is it peer reviewed? (Y/N and every nuance in between.) See the journal’s website.
2 – Who is involved in it? The editor & publisher? Are they well known & well thought of? Who has published articles there already: are these big players in your field? Read the journal!
3 – Is it abstracted/indexed by one of the big sources in your field? (The journal’s website should tell you this. Big publishers also offer their own databases of house journals.)
4 – What happens when you search on Google for an article from the journal? Do you get the article in the top few results? And on Google Scholar?
5 – Does it appear in the Web of Science or Scopus journal rankings?
6 – Take a look on COPAC: which big research libraries subscribe?
7 – Have a look at the UK’s published RAE2008 / forthcoming REF2014 data and see if articles from that journal were part of the evidence submitted, and rated as 4*.
8 – Do the journal articles have DOIs? This is a really useful feature for promotion of your article, and it will mean that altmetric tools can provide you with evidence of engagement with your article. (A quick way to check that a DOI resolves is sketched after this list.)
9 – Is there an open access option? (See SherpaRomeo.) This is a requirement of many research funders, but it is also useful for you when you want to promote your article.
10 – Is it on the list of predatory OA journals? You might want to avoid those, although check for yourself: note that some journals on the list are disputed/defended against the accusation of predation!
11 – Is it listed on the ISSN centre’s ROAD (http://road.issn.org/)? What does this tell you about it?
12 – If you have access through a library subscription, is it listed in Ulrich’s periodicals directory? What does this tell you about it? Note the “peer review” symbol of a striped referee’s shirt: if the shirt is not there, it doesn’t necessarily mean that the journal is not peer reviewed; you may have to investigate further.
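
On question 8, here is a minimal, hypothetical sketch of how you might quickly check that an article’s DOI resolves (doi.org redirects valid DOIs to the publisher’s landing page); the DOI below is only a placeholder to swap for a real one:

    # Hypothetical quick check that a DOI resolves via doi.org.
    import requests

    doi = "10.1000/xyz123"  # placeholder; substitute a real DOI from the journal
    response = requests.get(f"https://doi.org/{doi}", timeout=10)
    print(response.status_code, response.url)  # a 200 status and a publisher URL suggest the DOI resolves
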
FURTHER NUANCES…
– What type of peer review is used? Is it rigorous? Is it useful to you, even if you get rejected?
– Time to rejection/acceptance: how soon do you need to be published?
– Acceptance/rejection rate
– Journal Impact Factor / SJR score(s) / quartile for the field