Not enough time for reading in academia: can we measure it?

I wanted to explore a topic which has been popular on Twitter, at least amongst the tweets I saw over the summer: that of academics struggling to find the time to read. I’ve written this blogpost in something of a “summer exploration” spirit, since I connected this topic with my interest in bibliometrics.

During the summer there were many mentions on Twitter of the importance of reading in academia. Reading of any kind is important for training our minds to think. It’s important for training our own ability with words, our writing skills. And it’s important for keeping up to date with academic discoveries and developments in fields of interest, to name but a few advantages of reading. Pat Thomson is eloquent on the matter.

As a librarian by background, of course I’m a big fan of reading! But I can see how the pressure on scholars and researchers to publish, to bring in research grants and to contribute to other activities that are measured in performance evaluations and university rankings might actually be causing them to read less. I may be doing researchers a disservice in suggesting that they are reading less, but I mean it sympathetically. Carol Tenopir’s 2014 research into reading, based on questionnaires and academics’ self-reporting, is outlined on the Scholarly Kitchen blog: at first it did look as though there was a decline in reading, but in the end the research might only indicate that a plateau was reached, at a time when the volume of content being published is increasing. That alone might make some scholars feel unable to keep up with their field.

My provocative thought goes like this: If focussing on publication outputs and measuring them via bibliometrics has led to a lack of reading time (which I’m a long way off proving), then perhaps the solution is to also measure (and give credit for) time invested in reading!

Disciplinary differences are at the core of academic reading habits, as evidenced by studies of library impact on students, among others. Such studies have involved attempts to correlate student grades with library accesses, as explored in this 2015 paper:

Here there is some correlation between “quality” academic performance and library accesses, although the main conclusion seems to be the importance of the library when it comes to student retention. I also remember reading Graham Stone’s earlier work (cited in the paper above), and its discussion of data protection issues. These studies identify cohorts of students rather than individuals and their grades, owing to ethical (and legal) concerns which would apply when it comes to researchers, too.
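To make that cohort-level approach a little more concrete, here is a minimal sketch (in Python) of the kind of calculation such studies boil down to: correlating a cohort’s average library accesses with its average grade, never tracking individuals. All the cohort names and figures below are invented for illustration; the real studies work with anonymised institutional datasets.

    # Hypothetical sketch: cohort-level correlation of library use and grades.
    # Every figure below is invented; real studies use anonymised institutional data.
    from statistics import correlation  # available from Python 3.10

    # Each tuple: (cohort, mean e-resource accesses per student, mean grade %)
    cohorts = [
        ("History 2014",   35, 58.2),
        ("Chemistry 2014", 82, 61.5),
        ("Law 2014",       64, 63.1),
        ("Nursing 2014",   47, 59.8),
    ]

    accesses = [c[1] for c in cohorts]
    grades   = [c[2] for c in cohorts]

    # Pearson's r across cohorts rather than individuals: one way such
    # studies sidestep the data protection concerns mentioned above.
    r = correlation(accesses, grades)
    print(f"Pearson's r across {len(cohorts)} cohorts: {r:.2f}")

Working at this aggregated level loses statistical power, of course, but it is the sort of compromise that the ethical and legal constraints above push researchers towards.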

We must also remember that much content is not digital, or not in the library, whether physical or online. Increasingly, scholarly content is available online via open access, so we don’t need to be identifiably logged in to read it. And indeed, Tenopir’s later work reminds us that content, once downloaded, can be re-read or shared outside of publisher or library platforms. Automatically measuring reading to any degree of accuracy becomes possible only if you dictate how and where academic reading is to be done. Ethical concerns abound!

Instead of measuring time spent reading or the volume of content downloaded or accessed by researchers, perhaps we could give credit to researchers who cite more. After all, citations are an indication that the authors have read a paper, aren’t they? OK, I am being provocative again: how do we know which co-authors have read which of the cited papers? How do we know that a cited paper has been read in full: what if the pre-print has been read rather than the version of record, or only the abstract? Such doubts about what it means to read a paper are expressed in the comments of the Scholarly Kitchen post mentioned earlier.

Actually, we could say that reading and citations are already assessed indirectly, because we evaluate written outputs and publications, and their quality reflects the amount and quality of reading behind them. I think that’ll have to do, because the more I read about academic reading, the more I think we can’t know! How we evaluate the outputs is another matter, of course. I’ve blogged about peer review, but not article-level metrics – yet.

I tried to track down Tenopir’s published paper based on the self-reported questionnaire research critiqued on the Scholarly Kitchen. I think it must be the paper entitled “Scholarly article seeking, reading, and use: a continuing evolution from print to electronic in the sciences and social sciences”. The critiquing all occurred before the paper was published, so direct links weren’t provided. Research into how much researchers are reading, whether based on downloads or questionnaires, can illustrate disciplinary differences, or signal changes in research practice over time. Tenopir and her co-authors shed light on this, and opened up more questions to be answered. I wonder whether researchers could be persuaded to allow tracking software to spy on their reading habits for a limited period… there is much more to be explored in this area, but I’m sure that we won’t gain trust by suggesting reading metrics!

Image credit: CC0 Pixabay.

 
