Quality checks beyond peer review? Retractions, withdrawals, corrections, etc

I often find myself reading and writing about whether peer review is working, about the opportunities for post-publication peer review, and about the changes needed in scholarly communication. An article in the THE earlier this year described a “secret dossier on research fraud”, and the concerns it expresses are familiar, although I balk at the word “fraud”. The THE article and its source claim that:

scientists and journals are extremely reluctant to retract their papers, even in the face of damning evidence

Perhaps the scientists don’t completely understand the processes that publishers use, nor feel able to influence the consequences for their reputations, which they must maintain in order to stand a chance of winning the next research grant and remaining employed. I used to give workshops to budding researchers on “how to get published”, in which I would explain something of the publishing process, and my final slide was all about corrections, errata and retractions: what is the difference between them, and why and how do they occur? (Quick answers below!) Even where the reason for retraction should bring no shame, but rather honour for admitting a mistake, researchers still don’t want to have an article retracted.

Perhaps in the days of print there was even more reason for stringency in avoiding post-publication alterations: after all, the version of record, the print article, was impossible to correct, and researchers could only be alerted to retractions or corrections through metadata records or, if they were avid readers of a journal, by spotting notices in later issues. However, I do wonder whether, in the digital world, there is more room for post-publication alterations without shame, in the name of improving science. This is why it is important for researchers and publishers to work together to define the different categories of such alterations and what each means for a researcher’s reputation. There is a lack of clarity, which I think stems partly from the variety of practice amongst journals, publishers and even database providers in how they describe and handle the various circumstances in which post-publication alterations are needed.

Corrections, corrigenda and errata are used by journals for minor corrections to a published work, eg an author’s name was mis-spelled, a title was not properly capitalised, or a stated amount (eg a dosage) contained a minor error. These are published in later issues in print, added to metadata records in the digital sphere, and usually also visible in the digital full text, with a note in brackets after the corrected item. As a librarian, I’m interested in how this sort of information is transferred in metadata records: the U.S. National Library of Medicine website describes how these are usually all referred to as Errata in PubMed, and their page about this goes on to explain and categorise many different types of these notices.
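Since metadata records are where many readers will first learn of an erratum or retraction, here is a minimal sketch of how one might check a PubMed record for linked notices programmatically. The E-utilities efetch endpoint is real, but the element and RefType names below (CommentsCorrections, ErratumIn, RetractionIn) are simply my reading of the MEDLINE/PubMed XML, and the example PMID is hypothetical, so treat this as an illustration and check NLM’s own documentation.

```python
# A minimal sketch (not NLM-endorsed code): ask PubMed whether a record
# carries erratum/retraction links in its metadata. The XML element and
# RefType names below are my reading of the MEDLINE/PubMed XML format.
import urllib.request
import xml.etree.ElementTree as ET

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def amendment_notices(pmid: str):
    url = f"{EFETCH}?db=pubmed&id={pmid}&retmode=xml"
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    notices = []
    # CommentsCorrections entries link a record to its errata/retractions
    for cc in root.iter("CommentsCorrections"):
        ref_type = cc.get("RefType", "")        # e.g. "ErratumIn", "RetractionIn"
        source = cc.findtext("RefSource", "")   # citation of the linked notice
        if ref_type in {"ErratumIn", "RetractionIn", "PartialRetractionIn"}:
            notices.append((ref_type, source))
    return notices

# Example (hypothetical PMID):
# for ref_type, source in amendment_notices("12345678"):
#     print(ref_type, "->", source)
```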

For me, these are a very good reason to make sure that you read the final published version of an article that you intend to cite: the green OA pre-print version is useful for understanding the work, but it is not the version I recommend citing.

Retractions are when an article is withdrawn: this is something that you can do as the author, that your institution can do on your behalf (sometimes also called a withdrawal, see below), or that the editor or publisher of a journal can do. Reasons for retracting an article include a pervasive (but honest) error in the work, or sometimes unethical practice. I can’t recommend the RetractionWatch blog highly enough for examples and stories of retractions. Sometimes you also hear about a partial retraction, which might occur when only one figure or part of the conclusions is withdrawn whilst the rest of the paper is sound.

Withdrawals are when a paper is no longer included in a publication, often when it has accidentally been published twice. I am increasingly hearing of fees being charged to authors for a withdrawal. Publishers usually have policies about what they consider to be grounds for a withdrawal: see Elsevier’s explanation of withdrawals and retractions, for example.

My explanations are a very light-touch introduction to the subject: publishers’ guidance will give you more of an idea of what might happen to your own articles, but I do see a variety of terminology and practice. My advice to academics is never to assume that work which has been corrected or retracted is necessarily suspect, nor that it should affect a researcher’s reputation, unless the whole story is known. Just as we can’t take bibliometric or altmetric scores as the whole picture of an academic’s worth, we always need context. If we all did this, then there would be no reason for authors to resist retraction, but I know that that is an ideal. Hence the story in the THE which I began with…


Ensuring quality and annotating scientific publications. A summary of a Twitter chat

[Screenshot of a Twitter conversation. Caption: “Tweet tweet!”]

Last year (yes, I’m slow to blog!), I had a very productive conversation (or couple of conversations) on Twitter with Andrew Marsh, a former colleague and scientist at the University of Warwick, which is worth documenting here to give it a narrative, and to illustrate how Twitter sometimes works.

Back in November 2015, Andrew tweeted to ask who would sign reviews of manuscripts, when reporting on a presentation by the Chief Editor of Nature Chemistry, Stuart Cantrill. I replied on Twitter by asking whether such openness would make reviewers take more time over their reviews (thereby slowing peer review down). I wondered whether openness would make reviewers less direct, and therefore possibly less helpful because their comments would be more open to interpretation. I also wondered whether such open criticism would drive authors to engage in even more “pre-submission”, informal peer reviewing.

Andrew tells me that, at the original event “a show of hands and brief discussion in the room revealed that PIs or those who peer reviewed manuscripts regularly, declared themselves happy to reveal their identity whereas PhD students or less experienced researchers felt either unsure or uncomfortable in doing so.”

Our next chat was kick-started when Andrew pointed me to a news article from Nature that highlighted a new tool for annotating web pages, Hypothes.is. In our Twitter chat that ensued we considered:

  1. Are such annotations a kind of post-publication peer review? I think that they can work alongside traditional peer review, but as Andrew pointed out, they lack structure so they’re certainly no substitute.
  2. Attribution of such comments is important so that readers would know whose comments they are reading, and also possibly enable tracking of such activity, so that the work could be measured. Integration with ORCID would be a good way to attribute comments. (This is already planned, it seems: Dan Whaley picked up on our chat here!)
  3. Andrew wondered whether tracking of such comments could be done for altmetrics. Altmetric.com responded. Comments on Hypothes.is could signal scholarly attention for the work on which they comment, or indeed attract attention themselves. It takes a certain body of work before measuring comments from such a source becomes valuable, but does measurement itself incentivise researchers to comment? (See the rough sketch after this list for one way such annotations could be pulled programmatically.) I’m really interested in the latter point: motivation cropped up in an earlier blogpost of mine on peer review. I suspect that researchers will say that measurement does not affect them, but I’m also sure that some of those are well aware of, eg, their ResearchGate score!
  4. Such a tool offers a function similar to marginalia and scrawls in library books. Some are helpful shortcuts (left by altruists, or just those who wanted to help their future selves?!), some are rubbish (amusing at their best), and sometimes you recognise the handwriting of an individual who makes useful comments, hence the importance of attribution.
  5. There are also some similarities with social bookmarking and other collaboration tools online, where you can also publish reviews or leave comments on documents and publications.

And who thought that you couldn’t have meaningful conversations on Twitter?! You can also read responses on Twitter to eLife’s tweet about its piece on the need for open peer review.

The best part of this conversation between Andrew and me on Twitter was the ability to bring in others by incorporating their Twitter handles. We also picked up on what others were saying, like this tweet about journal citation distributions from Stephen Curry. The worst parts were trying to be succinct when making a point (whilst wanting to develop some points), feeling a need to collate the many points raised, and sometimes forgetting to flag people in.

Just as well you can also blog about these things, then!

 

Is this research article any good? Clues when crossing disciplines and asking new contacts.

As a reader, you know whether a journal article is good or not by any number of signs. Within your own field of expertise, you know quality research when you see it: you know, because you have done research yourself and you have read & learnt lots about others’ research. But what about when it’s not in your field of expertise?

Perhaps the most reliable marker of quality is if the article has been recommended to you by an expert in the field. But if you find something intriguing for yourself that is outside of your usual discipline, how do you know if it’s any good? It’s a good idea to ask someone for advice, and if you already know someone then great, but if not then there’s a lot you can do for yourself, before you reach out for help, to make sure that you make a good impression on a new contact.

Librarians teach information skills and we might suggest that you look for such clues as:

  1. relevance: skim the article: is it something that meets your need? – WHAT
  2. the author(s): do you know the name: is it someone whose work you value? If not, what can you quickly find out about them, eg other publications in their name or who funds their work: is there a likely bias to watch out for? – WHO & WHY 
  3. the journal title/publisher: do you already know that they usually publish high quality work? Is it peer reviewed and if so, how rigorously? What about the editorial board: any known names here? Does the journal have an impact factor? Where is it indexed: is it in the place(s) that you perform searches yourself? – WHERE 
  4. date of publication: is it something timely to your need? – WHEN
  5. references/citations: follow some: are they accurate and appropriate? When you skim read the item, is work from others properly attributed & referenced? – WHAT
  6. quality of presentation: is it well written/illustrated? Of course, absolute rubbish can be eloquently presented, and quality research badly written up. But if the creators deemed the output of high enough value for a polished effort, then maybe that’s a clue. – HOW
  7. metrics: has it been cited by an expert? Or by many people? Are many reading and downloading it? Have many tweeted or written about it (altmetrics tools can tell you this)? But you don’t always follow the crowd, do you? If you do, then you might miss a real gem, and isn’t your research a unique contribution?! (See the sketch after this list for one way to pull a few of these clues for a DOI.) – WHO
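Some of these clues can be gathered quickly by machine before you even skim the article. The sketch below pulls a few of them for a DOI from the Crossref REST API; the endpoint and field names (is-referenced-by-count, container-title, issued) are my reading of that public API, citation counts differ between services, and the example DOI is hypothetical.

```python
# A hedged sketch: gather a few What/Where/When/Who clues for a DOI from
# the Crossref REST API. Field names are my reading of the public API.
import json
import urllib.parse
import urllib.request

def crossref_clues(doi: str) -> dict:
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    with urllib.request.urlopen(url) as response:
        work = json.load(response)["message"]
    return {
        "title": (work.get("title") or [""])[0],                             # WHAT
        "journal": (work.get("container-title") or [""])[0],                 # WHERE
        "publisher": work.get("publisher", ""),                              # WHERE
        "year": work.get("issued", {}).get("date-parts", [[None]])[0][0],    # WHEN
        "citations": work.get("is-referenced-by-count", 0),                  # WHO (how many cite it)
    }

# Example (hypothetical DOI):
# print(crossref_clues("10.1234/example"))
```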

I usually quote Rudyard Kipling at this point:

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

So far, so library school 101. But how do you know if the research within is truly of high quality? If most published research findings are false, as John Ioannidis describes, then how do you separate the good research from the bad?

An understanding of the discipline would undoubtedly help, and speed up your evaluation. But you can help yourself further, partly in the way you read the paper. There are some great pieces out there about how to read a scientific paper, eg from Natalia Rodriguez.

As I read something for the first time, I look at whether the article sets itself in the context of existing literature and research: can you track and understand the connections? The second thing I look at is the methodology and methods: have the right ones been used? This may be especially hard to tell if you’re not an expert in the field, so you have to get familiar with the methodology used in the study, and think about how it applies to the problem being researched. Maybe coming from outside the discipline will give you a fresh perspective. You could also consider the other methodologies that might have applied (a part of peer review, for many journals). I like the recommendation from Phil Davis in the Scholarly Kitchen that the methodology chosen for the study should be appropriate or persuasive.

If the chosen methodology just doesn’t make sense to you, then this is a good time to seek out someone with expertise in the discipline, for a further explanation. By now you will have an intelligent question to ask such a contact, and you will be able to demonstrate the depth of your own interest. How do you find a new contact in another discipline? I’ll plug Piirus here, whose blog I manage: it is designed to quickly help researchers find collaborators, so you could seek contacts & reading recommendations through Piirus. And just maybe, one day your fresh perspective and their expertise could lead to a really fruitful collaboration!

Peer review motivations and measurement

Yesterday’s blogpost by David Crotty on the Scholarly Kitchen outlines the problems with the notion of giving credit for peer review. It is very thought-provoking, although I’m personally still keen to see peer review done in the open, and to explore the notion of credit for peer review some more. For me the real question is not whether to measure it, but how best to measure it and what value to set on that measure.

Both the blogpost and its comments discuss researchers’ current motivation for carrying out peer review:

  • To serve the community & advance the field (altruism?)
  • To learn what’s new in the field (& learn before it is published, i.e. before others!)
  • To impress editors/publishers (& thereby increase own chances of publication)
  • To contribute to a system in which their own papers will also benefit (self interest?)

Crotty writes that problems in peer review would arise from behavioural change amongst researchers if their motivation shifted towards chasing credit points. He poses some very interesting questions, including:

How much career credit should a researcher really expect to get for performing peer review?

I think that’s a great question! However, I do think that we should investigate potential ways to give credit for peer review. I’ve previously blogged about the problems with peer review and followed up on those thoughts, and I’ve no doubt that I’ll continue to give this space more thought: peer review is about quality, and as a librarian at heart, I’m keen that we have good quality information available as widely as possible.

In David Crotty’s post I am particularly concerned by the notion that researchers, as currently intrinsically motivated, will be prepared to take on higher workloads. I don’t want that for researchers: they are already under enormous pressure. Not all academics can work all waking hours. Some actually do (at least some of the time), I know, but presumably someone else cleans and cooks for them (wives? paid staff?), and even if all researchers had someone to do that for them, it is neither fair to the researchers nor good for academia to be made up of such isolated individuals.

One commenter makes the point that not all peer reviews are alike: some might take a day, some 20 minutes, so if credit is given along the lines of how many reviews someone has carried out, that won’t be quite fair. And yet, as Crotty argued in his blogpost, if you complicate your measurement then it’s overkill, because no-one really cares to know more than a simple count. Perhaps that’s a part of what needs fixing with peer review: a little more uniformity of practice. Is it fair to the younger journals (probably with papers from early career researchers who don’t trust themselves to submit to the journal giants) that they get comparatively cursory time from peer reviewers?

Another comment mentions that the current system favours free riding: not everyone carries out peer review, even though everyone benefits from the system. The counterpoint to this is in another comment which points out that there is already a de facto system of credit, in that journal editors are aware of who is carrying out peer review, and they wield real power, reviewing papers and sitting on funding panels. I’m not sure that I’d want to rely on a busy editor’s memory to get the credit I deserved, but the idea reminded me of how the peer review system has worked up until now, and the issue seems to be that the expanding, increasingly international research and publishing community is no longer as close-knit as it once was.

There is a broader issue here. Crotty suggested that university administrators would not want researchers to take the time to do peer review, but to do original research all the time since that’s what brings in the money and the glory. But in order to be a good researcher (and pull in the grant funding), one has to read others’ papers, and be aware of the direction of research in the field. Plus, review papers are often more highly cited than original research papers, so surely those administrators will want researchers who produce review papers and pull in the citations? Uni rankings often use bibliometric data, and administrators do care about those!

What we’re really talking about is ‘how to measure researchers’ performance’, and perhaps peer review (if openly measured) is a part of that, but perhaps not. I like the notion of some academics becoming expert peer reviewers, whilst others are expert department/lab leaders, grant writers, authors or even teachers. We all have different strengths and perhaps it’s not realistic to expect all of our researchers to do everything, but if you want a mixture in your team then you need to know who is doing what.

I’d like to finish with Kent Anderson’s thoughtful comment about retaining excellent reviewers:

Offering credit and incentives aimed at retaining strong reviewers is different from creating an incentives system to make everyone a reviewer (or to make everyone want to be a reviewer).

Let’s think on it some more…

Further thoughts on Peer Review & speeding up traditional journal publication

Back in January, I wrote about Peer Review. It’s a big topic! Here are some more reflections, following on from my last blog post about it.

Speeding things up in journal article publication. (On “Peer review takes a very long time”)

[Picture of a pocket watch]

I wrote that peer review “takes a very long time” because many scholars want to get their work out there to be read as soon as possible. Of course, this is a loose concept and “a very long time” is relative. Some might think that I am criticising publishers for being slow, but I’m not pointing the finger of blame! I know that publishers have been addressing the issue, and peer review has sped up in recent times, especially since there is now software that can help track it: SPARC has a handy round-up of manuscript submission software. However, the peer reviewers themselves must respond, and they are under a lot of pressure. The system can only be as fast as the slowest reviewer, and there are all sorts of (entirely understandable) circumstances that might slow an individual down.

I should take a look at some of the developments that have helped to speed up traditional scholarly communication, though:

Scholarly publishers have invested in initiatives like Sage’s OnlineFirst to help peer reviewed research articles to reach audiences before journal issues are complete, thus cutting publication waiting periods.

Some publishers have also introduced mega journals with cascading peer review systems, which are also often based on Gold Open Access. Impact Story’s blog has a great post about how authors can make the most of these types of journal. These speed up an article’s time to publication because, after a peer review that led to rejection from one title, your paper can be fast-tracked through to publication in the next “tier” of title at the same publisher, without the need to submit again and start the process from the very beginning.

And of course, as a librarian I should mention the sophisticated alerting services that help researchers to find out about each others’ papers as soon as possible: researchers are no longer dependent on the print copy landing on their desk, and finding the time to browse through the table of contents!

Putting it online yourself is quicker: why not try that?

Some research repositories might take non-peer-reviewed content, and in theory, authors could always put a copy of their work on a personal web-page before peer review if they’re confident in it and just want it out there. There are disciplinary differences in authors’ reactions to this idea. This article in PLOS Biology makes the case for the biology community following in the footsteps of physics, in using pre-print servers to share such early versions. Its authors point out that there are benefits to doing this, including:

Posting manuscripts as preprints also has the potential to improve the quality of science by allowing prepublication feedback from a large pool of reviewers.

Many authors would not share their early manuscripts in this way, because they value peer review as a process of polishing their work. I think this is a reason for peer review to take place in the open, because then it becomes apparent just how important a contribution a peer reviewer might have made to a paper. As I said in my previous post, peer reviewers should get credit for their work, but perhaps I should have made it clear that I’m not talking about it looking good on their CV, or their peer review activity going down well with their Head of Department!


Even authors who are happy to share un-polished pre-peer-review versions of their work (aka pre-prints, aka manuscripts) might be wary if it is not the norm in their discipline, because it might prejudice their chances of publication in the big-name journals of their field. Authors will likely have to agree to clauses stating that the work has not previously been published elsewhere. When I worked at the University of Warwick, in the early days of their institutional repository we surveyed a number of big publishers to ask if they would consider repository deposit to constitute prior publication, and thus a breach of this kind of clause in their authors’ agreement. Some said yes, some said no.

This is not such a clear area for authors, and for many it’s not worth the time of enquiring, or the risk of finding out the hard way, i.e. through rejection of their article because plagiarism detection software identifies it as previously published online. Researchers need the quality “badge” that a journal gives them, for their CV and their institution’s performance review processes: publishing articles is not only about communicating with other researchers, it is also about kudos.


For some authors, therefore (I would guess most), the earliest version they might share would be a post-peer-review version (sometimes called a post-print, sometimes an author’s final version), which, if the publisher imposes no embargo period, would become available at the same time as their article became available through an OnlineFirst scheme.


Post peer review: commentary and altmetrics

I mentioned post-publication peer review in my previous post: I thought of it then as an alternative to peer review, but perhaps I should think of it more as something complementary. Perhaps peer review doesn’t need to be either traditional or post-publication: perhaps it is really a process that doesn’t end with publication.


There are many ways that researchers share and comment on each other’s work after it has been published, and therefore after the peer review process for traditional articles. We can track these interactions on sites like ResearchGate and Mendeley, and through altmetrics software that collates data on such interactions… but altmetrics and its role is a subject I’ve looked at separately already, and it’s one I’m likely to return to again later!

Openness, replication, validation and the mark of quality in research… and knitting!

Whilst knitting (so I make no claim to have comprehensive notes), I watched a great talk on YouTube by the guy who wrote this paper:

Why most published research findings are false
John P A Ioannidis in PLoS Med (2005)

– Hedge fund managers don’t trust science: how do we know which science can be trusted?

– Looks like replication is an important aspect of science, for us to recognise quality

– Negative results should also be shared and lead to acknowledgement of contribution: there is a particular bias towards reporting positive results in some disciplines. I think he said that “the analysis planned is different to the analysis published about half of the time” amongst the 60 or so research teams who responded to the author by sending their requested protocols. And those who responded must presumably be amongst the most conscientious of researchers: the implication is that those who didn’t respond might deviate from their planned analysis even more often.

– Published articles should have published protocols associated with them, and a number of top journals have now agreed that, for articles about randomised controlled trials, the trials must already have been registered before publication.

– Journals might have policies (I think: is this a sign of quality for authors choosing where to publish?), but are they always being adhered to? Not necessarily!

– When small studies’ results are published, the sensible thing is to wait for a larger study to confirm the findings.

– Transparency of data is important too. It sounded like he summarised a study where some top researchers tried to re-do the analysis in 18 papers from a top journal, and they could only replicate the results properly in two articles. There were various problems with the others, which ranged from a lack of availability of the data, through use of home-made and unavailable software, to an un-interpretable description of the methods.

– There are five levels for making research more open and more replicable (and thus more validatable?):

  1. Registration of data
  2. Registration of protocols
  3. Registration of analysis plan
  4. Registration of analysis plan and raw data together
  5. Open live streaming

My reflection on it all was that my very act of knitting is a metaphor for, or even an example of, all of these themes, as my knitting is a form of replication. The knitting pattern was available to download for free on the knitters’ community site Ravelry, which is like open access publication, although you can buy individual patterns there too, and there’s frustrating, out-of-print stuff from books and magazines, too! Also, on Ravelry you can see pictures and notes from others’ projects that use the designs. This is partly replication, but also open, post-publication peer review, as the project notes sometimes point out errors in the instructions. Sometimes, designers then admit to errors and release new versions. It’s also apparent that some designers have already engaged test knitters to try to avoid such a post-publication revision (pre-publication peer review). Some test-knitters might be paid, some are doing a favour for a friend, and some seem to do it for the wool!

I had difficulty interpreting my pattern in one or two places (perhaps because I was watching a fascinating video at the same time!), and had to fall back on my experience/expertise/creativity.*  But finally, I was able to produce a very nice little top, and is that not a form of replication that indicates the quality of the original designer’s work?

* I was using a lace yarn for a top that was designed for worsted yarn, and my gauge with 5mm needles was close but not perfect, so I was destined for a few modifications. I think that this is somewhat akin to data adjustment! And if it were a really negative result, I could list it on Ravelry as an “Ugh”, so maybe I should suggest that Nature and Science start publishing “Ugh”s, asap!

Here’s a picture of what I knitted:

[Photo: baccarat]