Reflections and a simple round-up of Peer Review Week 2016

It has been Peer Review Week this week: I’ve been watching the hashtag on Twitter with interest (and linked to it in a blogpost for piirus.ac.uk), and on Monday I attended a webinar called “Recognising Review – New and Future Approaches for Acknowledging the Peer Review Process”.

I do like webinars, as I’ve blogged before: professional development/horizon scanning from my very own desktop! This week’s featured talks came from Paperhive and Publons, amongst others, both of which have been explored on this blog in the past. I was particularly interested to hear that Publons aims to record not only peer review effort, but also editorial contributions. (Right at the end of the week this year, there have been suggestions that editorial work be the focus of next year’s peer review week, so it seems to me that we’ve come full circle.) A question from the audience raised the prospect of a new researcher metric based on peer review tracking. I guess that’s an interesting space to watch!

I wondered where Peer Review Week came from: it seems to be a publisher initiative, if Twitter is anything to go by: the hashtag is dominated by their contributions. On Twitter at least, it also attracted some criticism of publishers: if you deliberately look at ways to recognise peer review, then some academics are going to ask whether it is right for publishers to profit so hugely from their free work. Some criticisms were painful to read and some were also highly amusing:

There were plenty of links to useful videos, webpages and infographics about how to carry out peer review, both for those new to it and for those already experienced, such as:

(On this topic, I thought that an infographic from Elsevier about reasons why reviewers refused to peer review was intriguing.)

Advice was also offered on how / how not to respond to peer reviews. My favourite:

And there were glimpses of what happens at the publisher or editor level:

There wasn’t much discussion of the issue of open vs blind or double-blind peer review, which I found interesting because recognition implies openness, at least to me. The THE reported some interesting research earlier this month about eliminating gender bias through double-blind review, so openness in the context of peer review is an issue I feel torn about. Discussion on Twitter seemed to focus mostly on incentives for peer review, and I suppose recognition facilitates that too.

Peer Review Week has also seen one of the juiciest stories in scholarly communication: fake peer reviews! We’ve been able to identify so much dodgy practice in the digital age, from fake papers and fake authors to fake email addresses (so that you can be your own peer reviewer) and citation rings. Some of this is, on one level, highly amusing: papers by Maggie Simpson, or a co-author who is, in fact, your cat. But on another level it is deeply concerning, and so it’s a space that will continue to fascinate me, because it definitely looks like a broken system: how do we stick it all together?

Event reporting: An Open Science meet-up in Berlin

Last week I went along to an Open Science meet-up here in Berlin. It was hosted at the Centre for Entrepreneurship at the Technische Universität, and the theme of the evening was

Academic Papers: collaboration, writing & discovery

There were presentations from two interesting, freshly developed collaboration tools for researchers:
  1. Paperhive – for having conversations about a paper: if you don’t understand something, you can ask a question and someone else will answer it. It doesn’t create copies of papers but allows you to search for them, and when you view a paper through the Paperhive interface, you see the comments. Collaborative reading!
  2. Authorea – a tool for co-authoring a paper, which apparently works with LaTeX, Google Docs and other formats besides. It “puts emphasis on collaboration and structured, visual editing.” Collaborative writing!
Discussion at the meeting was interesting: it was led by Alex from Paperhive, who evoked the “spirit of open science”, i.e. collaboration and sharing. And we all did share: if you’re interested in such themes then take a look at Twitter conversations with the #openscience hashtag, as of course some folks tweeted at the event!
I chatted to fellow freelancers and to researchers including Franzi, who is involved in a citizen science project at Berlin’s Natural History Museum, and also Sebastian who works for an open access publisher – of great sounding digital books – Language Science Press.
I was left reflecting on how data sharing can be achieved: opening access to papers is one thing, but opening your data and your whole science is another. Being open at the beginning about methodologies can help people to join disparate studies together and share the same methodology, making the results of their research more powerful.

But as ever, being open is just the start of the process, because you also have to make yourself heard! What channels are there for doing this? And of course, we all know of researchers who won’t release data because they want to get another 5 papers out of it themselves. Yet who can blame them in the publish or perish climate? What we measure and incentivise researchers for can have damaging effects, not least the salami slicing of research that would be far more meaningfully written up in a single paper, instead of across 6! How can we make open data itself the output? Well, such themes are big and not for me to worry about, thank goodness.

Last week was also the LIBER conference in Helsinki, where the library managers and repository and publishing folks were very busy discussing data-related themes. Once again, Twitter gives a flavour of the kind of things discussed there.

Quality checks beyond peer review? Retractions, withdrawals, corrections, etc

I often find myself reading/writing things about whether peer review is working or not, the opportunities for post publication peer review and about the changes needed in scholarly communication. An article in the THE earlier this year described a “secret dossier on research fraud” and the concerns it expresses are familiar, although I balk at the word “fraud”.  The THE article/its source claims that:

scientists and journals are extremely reluctant to retract their papers, even in the face of damning evidence

Perhaps the scientists don’t completely understand the processes that publishers use, nor feel able to influence the consequences for their reputations, which they must maintain in order to stand a chance of winning the next research grant and remaining employed. I used to give workshops to budding researchers on “how to get published”, in which I would explain something of the publishing process, and my final slide was all about corrections, errata and retractions: what is the difference between them, and why and how do they occur? (Quick answers below!) Even if the reason for a retraction should bring no shame, but honour for admitting a mistake, researchers still don’t want to have an article retracted.

Perhaps in the days of print there was even more reason for stringency in avoiding post-publication alterations: after all, the version of record, the print article, would have been impossible to correct, and researchers could only be alerted to any retractions or corrections through metadata records or, if they were avid readers of a journal, by spotting notices in later issues. However, I do wonder if, in the digital world, there is more room for post-publication alterations without shame, in the name of improving science. This is why it is important for researchers and publishers to work together to define the different categories of such alterations and what they mean for a researcher’s reputation. There is a lack of clarity, which I think stems partly from the variety of practice amongst journals, publishers and even database providers in how they describe and handle the various circumstances in which post-publication alterations are needed.

Corrections, corrigenda and errata are used by journals for minor corrections to a published work, eg the name of an author was mis-spelled, or a title not properly capitalised, or for a minor error in an amount mentioned, eg a dosage. These are published in later issues in print, added to metadata records in the digital sphere, and also usually visible in the digital full text with a note in brackets after the corrected item. As a librarian, I’m interested in how this sort of information is transferred in metadata records: the U.S. National Library of Medicine website describes how these are usually all referred to as Errata in PubMed, and their page about this goes on to explain and categorise many different types of such notices.
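As an illustration of how such notices surface in metadata, here is a minimal sketch of checking a PubMed record for correction or retraction notices. It assumes Python with the requests library and NCBI’s public E-utilities efetch endpoint; the PMID shown is a made-up placeholder.

```python
import requests
import xml.etree.ElementTree as ET

# NCBI E-utilities: fetch a PubMed record as XML.
EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
pmid = "12345678"  # hypothetical PubMed ID, for illustration only

xml_text = requests.get(
    EFETCH, params={"db": "pubmed", "id": pmid, "retmode": "xml"}
).text
root = ET.fromstring(xml_text)

# Post-publication notices appear as CommentsCorrections elements,
# whose RefType distinguishes errata from retractions.
for note in root.iter("CommentsCorrections"):
    ref_type = note.get("RefType")
    source = note.findtext("RefSource", default="")
    if ref_type in ("ErratumIn", "RetractionIn", "PartialRetractionIn"):
        print(f"{ref_type}: {source}")
```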

For me, such post-publication changes are a very good reason to ensure that you read the final published version of an article that you intend to cite: the green OA pre-print version of an article is useful for you to understand the work, but not the one I recommend citing.

Retractions are when an article is withdrawn: this is something that you can do as the author, or indeed your institution could do on your behalf (sometimes also called a withdrawal, see below), or the editor or publisher of a journal can retract an article. Reasons for retraction include a pervasive (but honest) error in the work, or sometimes unethical practice. I can’t recommend the RetractionWatch blog highly enough for examples and stories of retractions. Sometimes you also hear about a partial retraction, which might occur when only one figure or part of the conclusions is withdrawn, whilst the rest of the paper is sound.

Withdrawals are when a paper is no longer included in a publication, often when it has accidentally been published twice. I am increasingly hearing of fees being charged to authors for a withdrawal. Publishers usually have policies about what they consider to be grounds for a withdrawal: see Elsevier’s explanation of withdrawals and retractions, for example.

My explanations are a very light-touch introduction to the subject: publishers’ guidance will give you more of an idea about what might happen to your own articles, but I do see a variety of terminology and practice. My advice to academics is never to assume that work which has been corrected or retracted is necessarily suspect, nor that it should affect a researcher’s reputation, unless the whole story is known. Just as we can’t take bibliometric or altmetric scores as the whole picture of an academic’s worth, we always need context. If we all did this, then there would be no reason for authors to resist retraction, but I know that that is an ideal. Hence the story in the THE which I began with…

How to speed up publication of your research – and impress journal editors

In my last blogpost I looked at the time it takes to get published, and this led to a brief Twitter chat about how editors’ time gets wasted. Of course there are things that researchers can do to help speed up the whole system, just as there are things that publishers are trying to do. If you’re interested in how to write a great journal article in the first place (which, of course, is what will increase your chances of acceptance and therefore speed things up) then you could take a look at some great advice in the Guardian.

I’m not looking at writing in this blogpost, rather at the steps to publication that researchers can influence, sometimes for themselves and sometimes more altruistically. I imagine that a board game could be based on the academic publication process, whereby you get cards telling you that you must wait longer, or you get rejected, and sent to the start. Very occasionally you are told that a peer has sped things up for you in some way so that you (and your field) can move on.

Do what you’re told!
It sounds simple, but it’s amazing how many editors report that authors appear not to have read the guidelines before submitting. Wrong word counts, line spacing, no data supplied, wrong reference formats, etc. could all result in a desk rejection, thus wasting everyone’s time. A good reference managing tool will ease and expedite reference style reformatting, but even so, matching each journal’s style is a lot of work if you submit the same article to many journals, so perhaps this begins with choosing the right journal (see below).

Also, authors who are re-submitting need to ensure that they respond to ALL the editor’s and reviewers’ recommendations. Otherwise, there might be another round of revisions… or a rejection, setting you back to square one.

Be brief and ‘to the point’ in your correspondence with journal editors
First question to authors: do you really need to write to the editor? Writing to check if their journal is a good match for your article is apparently annoying to journal editors, especially if your email looks like an automated one. If you have a question, be sure that you can’t find the answer on the journal’s website: this way you can save editors’ time so that they use it to make the right publishing decisions. If you want to make a good impression on an editor or seek their opinion then perhaps find a way to meet them personally at a conference. (Tip: if they are on Twitter then they might announce which conferences they are going to!)

Choose the right journal to submit to

I have no magic formula but these steps might help you to decide:

  1. Look for a good subject match. Then check whether the type, scale and significance of your work fits the type of material usually published in that journal. In other words, read some of the content recently published in the journal you intend to submit to. Check their calls for papers and see if you match them. And read their guidelines (see above).
  2. Listen to experienced authors. If you know someone with experience of publishing in a particular journal, then perhaps ask them for advice: getting to know the journal you are submitting to is important in helping you to target the right one.
  3. Use bibliometric scores with caution. I have blogged here previously about 12 signs of quality for a journal, and note that I don’t mention the impact factor! My number 1 is about peer review, and I expand on that in this post, below. My number 5 is whether the journal is indexed on Web of Science or Scopus: this is not all about the impact factor either. What it means is that the journal you are considering has passed selection criteria in order to be indexed at all, that your article will be highly discoverable, and that it would contribute to your own h-index as an author. If you really want to use a bibliometric, you could look at article influence scores, and since this blogpost is about speeding things up, you could also consider the immediacy index, which indicates how quickly items are cited after publication (see the definition after this list).
  4. Can’t I just take a sneaky peek at the impact factors? I think this is a last resort! Some people see them as a proxy for a good reputation, but after all I’ve read about bibliometrics, I’d rather use my twelve signs. In my last blogpost I reported on a Nature News item, which implied that middle-range impact factor journals are likely to have a faster turnaround time, but you’ll have to dig a bit deeper to see if there’s anything in that idea for your discipline. In my view, if everyone is targeting the top impact factor journals, you can be sure that these journals will have delays and high rejection rates. You might miss the chance to contribute to a “rising star” journal.
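For reference, here is the standard definition of the immediacy index mentioned in point 3; it is simply a ratio calculated over a single year:

```latex
\text{Immediacy Index}(Y) =
  \frac{\text{citations received in year } Y \text{ by items published in year } Y}
       {\text{number of citable items published in year } Y}
```

A higher value suggests that a journal’s papers attract citations quickly after publication, which is about as close as standard bibliometrics get to measuring speed.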

Choose a perfect peer reviewer!
At some journals, you may get an option to suggest peer reviewers. I don’t imagine that there are many experts in your field who are so good at time management that they can magically create time, and who already know about and value your work, so you will have to balance your needs with what is on offer. Once again, you should be careful to follow the journal’s directions in suggesting peer reviewers. For example, it’s no good suggesting an expert practitioner as a peer reviewer if the journal explicitly asks for academics, and you probably can’t suggest your colleague either: read what the journal considers to be appropriate.

Is it the right peer review mechanism?
There are many variations of peer review, and some innovative practice might appeal to you if your main goal is speed of publication, so you could choose a journal that uses one of these modern methods.

Here is a list of some peer review innovations with acceleration in mind:

  1. You may have an option to pay for fast tracked peer review at your journal of choice.
  2. Seek an independent peer review yourself, before submission. The same type of company that journals might turn to if they offer a paid-for fast track peer review may also offer you a report that you can pay for directly. The example I know of is Rubriq.
    You can also ask colleagues or peers for a pre peer review, if you think that they might be willing.
  3. Take advantage of a “cascading peer review” gold open access (OA) route, at a publisher which offers one. It’s a shame that OA often appears to be a lower-quality option, because publishers say to authors the equivalent of “you’re rejected from this top journal but are invited to submit to our gold OA journal”. Such an invitation doesn’t reflect well on the publishers either, because of course gold OA is the route where authors pay a fee or “Article Processing Charge”. However, if your research budget can cover the cost then this can be quicker.
  4. Open reviews: there is a possibility that reviewers will be more thorough if their reviews are publicly seen, so I’m not sure that this will necessarily speed the process up. But if you’re looking for explicit reasons why you’ve been rejected, then such a system could be helpful. PeerJ is a well known example of a journal that does this.
  5. Publish first and opt for post-publication peer review. The example often given is F1000, which is really a publishing platform rather than a journal. Here, the research is published first, and labelled as “awaiting peer review”. It is indexed after peer review by places like PubMed, Scopus, the British Library, etc. F1000 also has open peer review, so the reviews as well as the latest version can be seen. Authors can make revisions after peer review, at any time. An alternative to F1000 is to put your draft paper into an open access repository, where it will at least be visible and available, and seek peer review through publication in a journal later. However, there are disciplinary differences as to whether this will be acceptable practice when you later submit to journals (is it a redundant publication because it’s in a repository?), and indeed whether your pre-print will be effective in claiming your “intellectual territory”. In some disciplines, the fear is that repository papers are not widely seen, so others might scoop you to reach recognised publication. In the sciences this is less likely, since the access to equipment and lengthy experiments required make it unlikely that work would be duplicated in time.

Be a peer reviewer, and be prompt with your responses
I have three steps you can follow, to accelerate even traditional peer review:

  1. When invited to carry out a peer review that you cannot find time for, or for which you are not the right person, say “no” quickly, and perhaps suggest someone else suitable. This will speed things up for your peers and make a good impression on an editor: one day this might be important.
  2. If you say “yes” then you can be prompt and clear: this will support your peers but may also enhance your reputation. Larger publishers may track peer reviewers’ work on a shared (internal only or publicly visible!) system, and you can claim credit yourself somewhere like Publons. (See an earlier blogpost that discusses credit for peer review.)
  3. Are you setting the bar too high? By raising standards ever higher, the time it takes for research to be shared is lengthened. Of course this is also about meeting the quality standards of the journal and thereby setting and maintaining the standards of your discipline. Not an easy balancing task!

Finally, remember that publication is only the beginning of the process: you also have to help your colleagues, peers and practitioners to find out about your article and your work. Some editors and publishers have advice on how to do that too, so I’m sure that it will impress them if you do this!

Rejections, revisions, journal shopping and time… more and more time

I read a great news item from Nature, called “Does it take too long to publish research?” and wanted to highlight it here. In particular, I thought that early career researchers might relate to the stories of featured researchers’ multiple rejections: there is some consolation in hearing others’ experiences. (Recently rejected authors might also seek advice in a great piece from The Scientist in 2015: Riding out rejection.) Also, I wanted to write my reflections, identifying some reasons for rejection (these appear in bold, throughout, in case you want to scan for them).

Whilst I’m on the topic of rejection stories: a recent episode of Radio 4’s The Life Scientific featured Peter Piot, who described (if I understood correctly) how difficult it was to get his research on HIV published in the 1980s because it was so groundbreaking that reviewers could not accept it. He knew that his findings were important and he persevered. So that could be one reason for rejection: you’re ahead of your field!

(Peter Piot also described his time working for the United Nations, in what was essentially a break from his academic career: if you’re interested in academic career breaks then you could take a look at the Piirus blog!)

Anyway, back to the Nature news item, where I picked up particular themes:

  1. Authors may have been rejected a number of times before they are even peer reviewed: the “desk rejection”. One of the authors featured was glad to finally get revisions after so many rejections without explanation. Without explanation, we can’t know what the editors’ decisions were based on, but as I noted in an earlier post, editors might be basing their decisions on criteria like relevance to the journal’s readership, or compliance with the journal’s guidelines.
  2. Journals do report on time to publication, but that doesn’t always include the time you’ve spent on revisions: at some journals, if you resubmit after making revisions then the clock is re-started at the resubmission date (a toy example after this list shows how much difference this can make). Likewise, I have read (or heard: sorry, I can’t find the link) elsewhere that reported rejection/acceptance rates don’t count papers invited for re-submission with revisions as rejections. So you might feel rejected when you have to make so many revisions, but in statistical terms your paper has not been rejected (yet!). There is still time for it to be rejected after you have resubmitted, of course, and that probably happens more often than you think. Some think that journals are not counting and reporting fairly; I think there is room for improvement, but it’s a complex area.
  3. Top journals can afford to be more picky and so the bar seems to have been raised, in terms of requirements for publication (hence increased numbers of authors per paper, who bring more data between them). As the Nature news item says: “Scientists grumble about overzealous critics who always seem to want more, or different, experiments to nail a point.”
  4. Rejections could be a result of authors “journal shopping”, whereby they submit to top/high impact journals first and work down a list. This is possibly due to the reliance, by those who hire and fund researchers, on the reputation and impact factor of the journal where an article is published. Researchers who target journals in the middle range of impact factor seem to stand the best chance of a quick review turnaround, but it seems that researchers are accepting the risk of rejection and slower publication in order to stand a chance of appearing in a top journal.
  5. Journal editors and publishers are trying to ensure that the publication process is not slowed down, wherever possible. I’d like to quote one nice example of such attempts: “In 2009, Cell also restricted the amount of supplemental material that could accompany papers as a way to keep requests for “additional, unrelated experiments” at bay.” However, the Nature News item also points out the increased volume of papers to be processed and additional checks that papers might go through these days, for example plagiarism screens, animal welfare reports, competing interest disclosures, etc. Plagiarism screens can be tough: I remember an author telling me about how his paper was rejected for what amounted to self-plagiarism.
  6. The peer review process does take time, and at different journals this process might be quicker or slower, but even though (as I’ve previously blogged) there are pressures on the peer review system, it is not taking longer than it used to, on average. Neither has the digital world sped it up. The news item goes on to recount some of the innovations around peer review that various journals and publishers are implementing.
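On point 2, here is a toy example (with made-up dates, using Python’s standard datetime module) of how a clock restarted at resubmission can understate the time an author actually experienced:

```python
from datetime import date

# Made-up dates: first submission, resubmission after major
# revisions, and final publication.
first_submission = date(2016, 1, 10)
resubmission = date(2016, 7, 4)
publication = date(2016, 9, 1)

# Some journals report time-to-publication from resubmission...
reported = (publication - resubmission).days
# ...while the author lived through the full span.
experienced = (publication - first_submission).days

print(f"Journal reports: {reported} days")        # 59 days
print(f"Author experienced: {experienced} days")  # 235 days
```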

This made me think that there’s got to be a project somewhere, for someone to classify the revisions asked for in peer review processes and then count which is the most common. Reasons in my list so far:

  • poorly/not succinctly written (i.e. not intelligible!)
  • too little explanation/text
  • abstract doesn’t reflect findings
  • ethical issues with the data presented
  • ethical issues with the method
  • method unsuited to question
  • conclusions are over-reaching
  • needs to be set in context of other (specific/non-specific) research & add citations

These could be areas to be revised or indeed, reasons for rejection. I’m sure that there are more issue types and that my list is not complete, so feel free to share some more in the comments.

I know that some authors take the revision suggestions but do not resubmit to the journal that reviewed their article: they withdraw it from that journal and then submit to one lower on the prestige list, thereby perhaps side-stepping another rejection. And thereby apparently achieving publication more quickly, since the second (or fifth, or fifteenth) choice journal cannot know how long an article spent awaiting the verdict of a different journal. Perhaps that is why journals prefer to count their publication time from the date of resubmission: they can’t know, either, whether an article will ever be resubmitted. And is it fair of an author to use a journal’s peer review process to polish their article, but not actually publish with that journal? A complex area, like I said already.

Well, if all this complexity has put you in need of cheering up, then I must recommend the Journal of Universal Rejection to you. If you don’t laugh then you might cry…

Publish then publicise & monitor. Publication is not the end of the process!

Once your journal article or research output has been accepted and published, there are lots of things that you can do to spread the word about it. This blogpost has my own list of the top four ways you could do this (other than putting it on your CV, of course). I also recommend any biologists or visual thinkers to look at:
Lobet, Guillaume (2014): Science Valorisation. figshare. http://dx.doi.org/10.6084/m9.figshare.1057995
Lobet describes the process as “publish: identify yourself: communicate”, and points out useful tools along the way, including recommending that authors identify themselves in ORCID, ResearchGate, Academia.edu, ImpactStory and LinkedIn. (Such services can create a kind of online, public CV and my favourite for researchers is ORCID.) You may also find that your publisher offers advice on ways to publicise your paper further.

PUBLICISE

1) Talk about it! Share your findings formally at a conference. Mention it in conversations with your peers. Include it in your teaching.

2) Tweet about it! If you’re not on Twitter yourself (or even if you are!) then you could ask a colleague to tweet about it for you. A co-author or the journal editor or publisher might tweet about it, or you could approach a University press officer. If you tweet yourself then you could pin the tweet about your latest paper to your profile on Twitter.

3) Open it up! Add your paper to at least one Open Access repository, such as your institutional repository (they might also tweet about it). This way your paper will be available even to those who don’t subscribe to the journal. You can find an OA repository on ROAR or OpenDOAR. Each repository will have its own community of visitors and ways in which to help people discover your content, so you might choose more than one repository: perhaps one for your paper and one for data or other material associated with it. If you put an object into Figshare, for example, it will be assigned a DOI, and that will be really handy for getting altmetrics measures.

4) Be social! Twitter is one way to do this already, of course, but you could also blog about it, on your own blog or perhaps as a guest post for an existing blog with a large audience already. You could put visual content like slides and infographics into Slideshare, and send out an update via LinkedIn. Choose at least one more social media channel of your choice, for each paper.

MONITOR

  1. Watch download stats for your paper, on your publisher’s website. Measuring the success of casual mentions is difficult, but you can often see a spike in download statistics for a paper, after it has been mentioned at a conference.
  2. Watch Twitter analytics: is your tweet about your paper one of your Top Tweets? You can see how many “engagements” a tweet has, i.e., how many clicks, favourites, re-tweets and replies, etc it accrued. If you use a link shortening service, you should also be able to see how many clicks there have been on your link, and where from. (bit.ly is one of many such shortening services.) This is the measure that I value most. If no-one is clicking to look at your content, then perhaps Twitter is not working for you and you could investigate why not or focus on more efficient channels.
  3. Repositories will often offer you stats about downloads, just like your publisher, and either or both may offer you access to an altmetrics tool (a sketch of querying one such service by DOI follows this list). Take a look at these to see more information behind the numbers: who is interested and engaged with your work, and how can you use this knowledge? Perhaps it will help you to choose which of the other possible social media channels you might use, as this is where others in your discipline are already engaged with your work.
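As promised above, here is a minimal sketch of querying one altmetrics service by DOI. It assumes Python with the requests library and Altmetric’s public v1 DOI endpoint; the DOI is a made-up placeholder, and the field names are examples of per-source counts the service returns:

```python
import requests

# Altmetric offers a public endpoint for basic DOI lookups.
doi = "10.1234/example.5678"  # hypothetical DOI, for illustration only
response = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")

if response.status_code == 404:
    # The service returns 404 when no attention has been recorded yet.
    print("No attention data found.")
else:
    response.raise_for_status()
    data = response.json()
    # A few of the per-source counts in the JSON response.
    print("Altmetric score:", data.get("score"))
    print("Tweets:", data.get("cited_by_tweeters_count"))
    print("Blog/feed mentions:", data.get("cited_by_feeds_count"))
```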

 

Ultimately, you might be interested in citations rather than engagements on Twitter or even webpage visits or downloads for your paper. It’s hard to draw a definite connection between such online activity and citations for journal papers, but I’m pretty sure that no-one is going to cite your paper if they don’t even know it exists, so if this is important to you, then I would say, shout loud!

Ensuring quality and annotating scientific publications. A summary of a Twitter chat

Screenshot of a Twitter conversation. Tweet tweet!

Last year (yes, I’m slow to blog!), I had a very productive conversation (or couple of conversations) on Twitter with Andrew Marsh, a former colleague and scientist at the University of Warwick. They are worth documenting here to give them a narrative, and to illustrate how Twitter sometimes works.

Back in November 2015, Andrew tweeted to ask who would sign reviews of manuscripts, when reporting on a presentation by the Chief Editor of Nature Chemistry, Stuart Cantrill. I replied on Twitter by asking whether such openness would make reviewers take more time over their reviews (thereby slowing peer review down). I wondered whether openness would make reviewers less direct, and therefore possibly less helpful, as their comments would be more open to interpretation. Also, whether such open criticism would drive authors to engage in even more “pre-submission”, informal peer reviewing.

Andrew tells me that, at the original event “a show of hands and brief discussion in the room revealed that PIs or those who peer reviewed manuscripts regularly, declared themselves happy to reveal their identity whereas PhD students or less experienced researchers felt either unsure or uncomfortable in doing so.”

Our next chat was kick-started when Andrew pointed me to a news article from Nature that highlighted a new tool for annotating web pages, Hypothes.is. In the Twitter chat that ensued, we considered:

  1. Are such annotations a kind of post-publication peer review? I think that they can work alongside traditional peer review, but as Andrew pointed out, they lack structure so they’re certainly no substitute.
  2. Attribution of such comments is important so that readers would know whose comments they are reading, and also possibly enable tracking of such activity, so that the work could be measured. Integration with ORCID would be a good way to attribute comments. (This is already planned, it seems: Dan Whaley picked up on our chat here!)
  3. Andrew wondered whether tracking of such comments could be done for altmetrics. Altmetric.com responded. Comments on Hypothes.is could signal scholarly attention for the work which they comment on, or indeed attract attention themselves (public annotations are already retrievable programmatically: see the sketch after this list). It takes a certain body of work before measuring comments from such a source becomes valuable, but does measuring itself incentivise researchers to comment? I’m really interested in the latter point: motivation cropped up in an earlier blogpost of mine on peer review. I suspect that researchers will say that measurement does not affect them, but I’m also sure that some of those are well aware of, eg, their ResearchGate score!
  4. Such a tool offers a function similar to marginalia and scrawls in library books. Some are helpful shortcuts (left by altruists, or just those who wanted to help their future selves?!), some are rubbish (amusing at their best), and sometimes you recognise the handwriting of an individual who makes useful comments, hence the importance of attribution.
  5. There are also some similarities with social bookmarking and other collaboration tools online, where you can also publish reviews or leave comments on documents and publications.
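As mentioned in point 3, here is a minimal sketch of fetching public annotations via Hypothes.is’s search API. It assumes Python with the requests library; the target URL is a made-up placeholder:

```python
import requests

# Hypothes.is exposes public annotations via its search endpoint.
API = "https://api.hypothes.is/api/search"
paper_url = "https://doi.org/10.1234/example.5678"  # hypothetical paper

response = requests.get(API, params={"uri": paper_url, "limit": 50})
response.raise_for_status()
results = response.json()

print(f"{results['total']} public annotation(s) found")
for row in results["rows"]:
    # Each annotation records who made it and the comment text,
    # which is exactly the attribution discussed above.
    print(row["user"], "-", row.get("text", "")[:80])
```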

And who thought that you couldn’t have meaningful conversations on Twitter?! You can also read responses on Twitter to eLife‘s tweet about its piece on the need for open peer review.

The best part of this conversation between Andrew and me on Twitter was the ability to bring in others, by incorporating their Twitter handles. We also picked up on what others were saying, like this tweet about journal citation distributions from Stephen Curry. The worst parts were trying to be succinct when making a point (whilst wanting to develop some points), feeling a need to collate the many points raised, and sometimes forgetting to flag people.

Just as well you can also blog about these things, then!

Who was at the Frankfurt book fair?

Many international publishers

I recently wrote about three particularly German things that I spotted at the Frankfurt book fair, but there was so much there that here is another blogpost full of pictures: a quick run-through of who I spotted at the book fair, with photos!

Of course, the Frankfurt book fair is huge: the exhibition space is much bigger than Online Information or the UKSG conference, which are the closest things to it that I’ve attended in the past. And it is more properly called the International Frankfurt Book Fair! Some international publishers were to be found in the halls for their countries, where you could hear their languages being spoken all around, whilst others were scattered in other halls matching their content rather than their nation, like this one in the academic publishing hall.

Specialist book publishers

There was a great deal of variety in the types of book represented at the fair, and all things book-related. Those seeking something special could find beautiful facsimiles or antique works, but the section of the fair dedicated to antiquities was guarded by extra security: you had to leave coats and bags behind to go in, so I didn’t. After all, I’m not in a position to invest in or be guardian of such treasures, and there was so much else to see.

Another area of the fair that had extra security was a hall that was apparently new for this year, where literary agents gathered for pre-booked meetings only. I wonder what was going on behind those screens? Agents selling books to publishers and negotiating terms, I imagine. The whole fair had the atmosphere of high-stakes deals and people going about important business, not just in the exhibition halls but all around the site. There were publishers doing deals with libraries and bookshops, and technology providers with services for the publishers or with products for readers directly. There were education tool providers, and also companies who sell all the extras that you can find in bookshops, like stationery and gifts: many of these stalls were making individual item sales at the book fair too, so you could pick up a present for your loved ones.

Not just books: gift providers, too

I spent most of my time in the hall for scientific and academic publishing, but I did walk through other halls, and spotted many art publishers and stalls for children’s books and comic books, which had some highly creative and attractive displays. These were really inspiring and made me feel proud to be a part of this information world, with just a little pang of regret that the academic world is so much less aesthetic and so much more serious-looking! Ah well, the academic information world is full of really interesting challenges, and I was really pleased to see that a German library school was amongst the stalls in the education area, recruiting students to degree programmes in librarianship and information science.

Publishers of children’s books

There was so much to see, across so many enormous conference halls, that it was quite possible to get lost in the indoor world of the exhibition centre and to forget the world outside… sometimes it seemed as though the whole world was at the Frankfurt book fair!

A rare glimpse of the outside world, from within the Exhibition centre at Frankfurt.

Peer review motivations and measurement

Yesterday’s blogpost by David Crotty on Scholarly Kitchen outlines the problems with the notion of giving credit for peer review. It is very thought-provoking, although I’m personally still keen to see peer review done in the open, and to explore the notion of credit for peer review some more. For me the real question is not whether to measure it, but how best to measure it and what value to set on that measure.

Both the blogpost and its comments discuss researchers’ current motivation for carrying out peer review:

  • To serve the community & advance the field (altruism?)
  • To learn what’s new in the field (& learn before it is published, i.e. before others!)
  • To impress editors/publishers (& thereby increase own chances of publication)
  • To contribute to a system in which their own papers will also benefit (self interest?)

Crotty writes that problems in peer review would arise from behavioural change amongst researchers, if their motivation shifted towards chasing credit points. He poses some very interesting questions, including:

How much career credit should a researcher really expect to get for performing peer review?

I think that’s a great question! However, I do think that we should investigate potential ways to give credit for peer review. I’ve previously blogged about the problems with peer review and followed up on those thoughts and I’ve no doubt that I’ll continue to give this space more thought: peer review is about quality, and as a librarian at heart, I’m keen that we have good quality information available as widely as possible.

I am particularly concerned by the notion in David Crotty’s post that researchers, being intrinsically motivated, will be prepared to take on ever higher workloads. I don’t want that for researchers: they are already under enormous amounts of pressure. Not all academics can work all waking hours. Some actually do (at least some of the time), I know, but presumably someone else cleans and cooks for them (wives? paid staff?), and even if all researchers had someone to do that for them, it’s not fair to the researchers, or even good for academia, for it to comprise such isolated individuals.

One commenter makes the point that not all peer reviews are alike: some might take a day, some 20 minutes, so if credit is given according to how many reviews someone has carried out, this won’t be quite fair. And yet, as Crotty argued in his blogpost, if you complicate your measurement then it’s overkill, because no-one really cares to know more than a simple count. Perhaps that’s part of what needs fixing with peer review: a little more uniformity of practice. Is it fair to younger journals (probably with papers from early career researchers who don’t trust themselves to submit to the journal giants) that they get comparatively cursory time from peer reviewers?

Another comment mentions that the current system favours free riding: not everyone carries out peer review, even though everyone benefits from the system. The counterpoint to this is in another comment which points out that there is already a de facto system of credit, in that journal editors are aware of who is carrying out peer review, and they wield real power, reviewing papers and sitting on funding panels. I’m not sure that I’d want to rely on a busy editor’s memory to get the credit I deserved, but the idea reminded me of how the peer review system has worked up until now, and the issue seems to be that the expanding, increasingly international research and publishing community is no longer as close-knit as it once was.

There is a broader issue here. Crotty suggested that university administrators would not want researchers to take the time to do peer review, but to do original research all the time since that’s what brings in the money and the glory. But in order to be a good researcher (and pull in the grant funding), one has to read others’ papers, and be aware of the direction of research in the field. Plus, review papers are often more highly cited than original research papers, so surely those administrators will want researchers who produce review papers and pull in the citations? Uni rankings often use bibliometric data, and administrators do care about those!

What we’re really talking about is ‘how to measure researchers’ performance’, and perhaps peer review (if openly measured) is a part of that, but perhaps not. I like the notion of some academics becoming expert peer reviewers, whilst others are expert department/lab leaders, or grant writers, or authors, or even teachers. We all have different strengths, and perhaps it’s not realistic to expect all of our researchers to do everything; but if you want a mixture in your team then you need to know who is doing what.

I’d like to finish with Kent Anderson’s thoughtful comment about retaining excellent reviewers:

Offering credit and incentives aimed at retaining strong reviewers is different from creating an incentives system to make everyone a reviewer (or to make everyone want to be a reviewer).

Let’s think on it some more…