Snowy Stockholm and Nordic Librarians!

Picture from Twitter @Micha2508

Last week I attended Elsevier’s Nordic Library Connect event in Stockholm, Sweden. I presented the metrics poster/card and slide set that I had previously researched for Elsevier. It’s a great poster, but the entire set of metrics takes some digesting. Presenting them all as slides in around 30 minutes was not my best idea, even for an audience of librarians! The poster itself was popular though, as it is useful to keep on the wall somewhere to refer to when you need to refresh your knowledge of certain metrics:

https://libraryconnect.elsevier.com/sites/default/files/ELS_LC_metrics_poster_V2.0_researcher_2016.pdf

I reflected after my talk that I should probably have chosen a few of the metrics to present, and then added more information and context, such as screen captures of where to find these metrics in the wild. It was a very useful experience, not least because it gave me this idea, but also because I got to meet some lovely folks who work in libraries in the Scandinavian countries.

UPDATE 23 Nov 2016: now you can watch a video of my talk (or one of the others) online.

I met these guys… but also real people!

I particularly valued a presentation from fellow speaker Oliver Renn of ETH Zurich. He has obviously built up a fantastic relationship with the departments that his library serves, and I thought that the menus he offered them were inspired. These are explained in the magazine that he also produces for his departments: see p8 of this 2015 edition.

See tweets from the event by clicking on the hashtag in this tweet:

 

Quality checks beyond peer review? Retractions, withdrawals, corrections, etc

I often find myself reading and writing about whether peer review is working, the opportunities for post-publication peer review, and the changes needed in scholarly communication. An article in the THE earlier this year described a “secret dossier on research fraud”, and the concerns it expresses are familiar, although I balk at the word “fraud”. The THE article (and its source) claims that:

scientists and journals are extremely reluctant to retract their papers, even in the face of damning evidence

Perhaps scientists don’t completely understand the processes that publishers use, and perhaps they feel unable to influence the consequences for their reputations, which they must maintain in order to stand a chance of winning the next research grant and remaining employed. I used to give workshops to budding researchers on “how to get published”, in which I would explain something of the publishing process, and my final slide was all about corrections, errata and retractions: what is the difference between them, and why and how do they occur? (Quick answers below!) Even when the reason for a retraction should bring no shame, but rather honour for admitting a mistake, researchers still don’t want to have an article retracted.

Perhaps in the days of print there was even more reason for stringency in avoiding post-publication alterations: after all, the version of record, the print article, was impossible to correct, and researchers could only be alerted to retractions or corrections through metadata records or, if they were avid readers of a journal, by spotting notices in later issues. In the digital world, however, I wonder if there is more room for post-publication alterations without shame, in the name of improving science. This is why it is important for researchers and publishers to work together to define the different categories of such alterations and what they mean for a researcher’s reputation. There is a lack of clarity, which I think stems partly from the variety of practice among journals, publishers and even database providers in how they describe and handle the various circumstances in which post-publication alterations are needed.

Corrections, corrigenda and errata are used by journals for minor corrections to a published work, e.g. an author’s name was mis-spelled, a title was not properly capitalised, or an amount mentioned, such as a dosage, contained a small error. These are published in later issues in print, added to metadata records in the digital sphere, and usually also visible in the digital full text with a note in brackets after the corrected item. As a librarian, I’m interested in how this sort of information is transferred in metadata records: the U.S. National Library of Medicine website describes how these are usually all referred to as Errata in PubMed, and their page about this goes on to explain and categorise many different types of these notices.
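Because these notices are tagged in the metadata, they are also searchable. As a rough, hedged illustration (a sketch of my own, not anyone’s official workflow): PubMed records corrections and retractions as publication types, and you can count them through NCBI’s public E-utilities.

```python
# A minimal sketch, assuming NCBI's public E-utilities (esearch) and PubMed's
# publication-type tags; adjust the search terms to suit your own question.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def count_notices(publication_type: str, year: str = "2016") -> int:
    """Count PubMed records of a given publication type for one publication year."""
    params = {
        "db": "pubmed",
        "term": f'"{publication_type}"[Publication Type] AND {year}[dp]',
        "retmode": "json",
        "retmax": 0,  # we only need the total count, not the record IDs
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

for ptype in ("published erratum", "retraction of publication"):
    print(ptype, count_notices(ptype))
```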

For me, these are a very good reason to ensure that you read the final published version of an article that you intend to cite: the green OA pre-print version is useful for understanding the work, but it is not the one I recommend citing.

Retractions are when a published article is withdrawn: the author can do this, the author’s institution can do it on their behalf (sometimes also called a withdrawal, see below), or the editor or publisher of the journal can retract the article. Reasons for retraction include a pervasive (but honest) error in the work, or sometimes unethical practice. I can’t recommend the RetractionWatch blog highly enough for examples and stories of retractions. Sometimes you also hear about a partial retraction, which might occur when only one figure or part of the conclusions is withdrawn, whilst the rest of the paper is sound.

Withdrawals are when a paper is no longer included in a publication, often when it has accidentally been published twice. I am increasingly hearing of fees being charged to authors for a withdrawal. Publishers usually have policies about what they consider to be grounds for a withdrawal: see Elsevier’s explanation of withdrawals and retractions, for example.

My explanations are a very light-touch introduction to the subject: publishers’ guidance will give you more of an idea about what might happen to your own articles, but I do see a variety of terminology and practice. My advice to academics is never to assume that work which has been corrected or retracted is necessarily suspect, nor that it should affect a researcher’s reputation, unless the whole story is known. Just as we can’t take bibliometric or altmetric scores as the whole picture of an academic’s worth, we always need context. If we all did this, then there would be no reason for authors to resist retraction, but I know that that is an ideal. Hence the story in the THE with which I began…

 

 

How to speed up publication of your research – and impress journal editors

In my last blogpost I looked at the time it takes to get published, and this led to a brief Twitter chat about how editors’ time gets wasted. Of course there are things that researchers can do to help speed up the whole system, just as there are things that publishers are trying to do. If you’re interested in how to write a great journal article in the first place (which, of course, is what will increase your chances of acceptance and therefore speed things up), then you could take a look at some great advice in the Guardian.

I’m not looking at writing in this blogpost, but rather at the steps to publication that researchers can influence, sometimes for themselves and sometimes more altruistically. I imagine that a board game could be based on the academic publication process, whereby you draw cards telling you that you must wait longer, or that you are rejected and sent back to the start. Very occasionally you are told that a peer has sped things up for you in some way so that you (and your field) can move on.

Do what you’re told!
It sounds simple, but it’s amazing how many editors report that authors appear not to have read the guidelines before submitting. Wrong word counts, wrong line spacing, no data supplied, wrong reference formats and so on could all result in a desk rejection, thus wasting everyone’s time. A good reference managing tool will ease and expedite reference style reformatting, but even so, matching each journal’s style is a lot of work if you submit the same article to many journals, so perhaps this begins with choosing the right journal (see below).

Also, authors who are re-submitting need to ensure that they respond to ALL the editor’s and reviewers’ recommendations. Otherwise, there might be another round of revisions… or a rejection, setting you back to square one.

Be brief and ‘to the point’ in your correspondence with journal editors
First question to authors: do you really need to write to the editor? Writing to check whether their journal is a good match for your article is apparently annoying to journal editors, especially if your email looks like an automated one. If you have a question, be sure that you can’t find the answer on the journal’s website: this way you save editors’ time so that they can use it to make the right publishing decisions. If you want to make a good impression on an editor or seek their opinion, then perhaps find a way to meet them personally at a conference. (Tip: if they are on Twitter then they might announce which conferences they are going to!)

Choose the right journal to submit to

I have no magic formula but these steps might help you to decide:

  1. Look for a good subject match. Then consider whether the type, scale and significance of your work fit the material usually published in that journal. In other words, read some of the content recently published in the journal you intend to submit to. Check their calls for papers and see if you match them. And read their guidelines (see above).
  2. Listen to experienced authors. If you know someone with experience of publishing in a particular journal, then perhaps ask them for advice: getting to know the journal you are submitting to is important in helping you to target the right one.
  3. Use bibliometric scores with caution. I have blogged here previously about 12 signs of quality for a journal, and note that I don’t mention the impact factor! My number 1 is about peer review, and I expand on that below. My number 5 is whether the journal is indexed in Web of Science or Scopus: this is not all about the impact factor either. What it means is that the journal you are considering has passed selection criteria in order to be indexed at all, that your article will be highly discoverable, and that it can contribute to your own h-index as an author (see the short illustration after this list). If you really want to use a bibliometric indicator, you could look at the Article Influence Score, and since this blogpost is about speeding things up, you could also consider the immediacy index, which indicates how quickly items in a journal are cited after publication.
  4. Can’t I just take a sneaky peek at the impact factors? I think this is a last resort! Some people see them as a proxy for a good reputation, but after all I’ve read about bibliometrics, I’d rather use my twelve signs. In my last blogpost I reported on a Nature News item which implied that middle-range impact factor journals are likely to have a faster turnaround time, but you’ll have to dig a bit deeper to see if there’s anything in that idea for your discipline. In my view, if everyone is targeting the top impact factor journals, you can be sure that these journals will have delays and high rejection rates. You might miss the chance to contribute to a “rising star” journal.
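Since the h-index comes up above, here is a small illustration of how it is calculated (my own sketch, not part of any journal’s guidance): an author has index h if h of their papers each have at least h citations.

```python
# Illustrative only: compute an h-index from a list of per-paper citation counts.
def h_index(citations: list[int]) -> int:
    counts = sorted(citations, reverse=True)   # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # at least `rank` papers have `rank` or more citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # -> 4: four papers each have at least 4 citations
```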

Choose a perfect peer reviewer!
At some journals, you may get the option to suggest peer reviewers. I don’t imagine that there are many experts in your field who are so good at time management that they can magically create time, and who already know about and value your work, so you will have to balance your needs with what is on offer. Once again, you should be careful to follow the journal’s directions in suggesting peer reviewers. For example, it’s no good suggesting an expert practitioner as a peer reviewer if the journal explicitly asks for academics, and you probably can’t suggest your colleague either: read what the journal considers to be appropriate.

Is it the right peer review mechanism?
There are many variations of peer review, and some innovative practice might appeal to you if your main goal is speed of publication, so you could choose a journal that uses one of these modern methods.

Here is a list of some peer review innovations with acceleration in mind:

  1. You may have an option to pay for fast tracked peer review at your journal of choice.
  2. Seek an independent peer review yourself, before submission. The same type of company that journals might turn to if they offer a paid-for fast track peer review may also offer you a report that you can pay for directly. The example I know of is Rubriq.
    You can also ask colleagues or peers for a pre-submission peer review, if you think that they might be willing.
  3. Take advantage of a “cascading peer review” gold open access (OA) route, at a publisher which offers one. It’s a shame that OA often appears to be a lower quality option, because publishers say to authors the equivalent of “you’re rejected from this top journal but are invited to submit to our gold OA journal”. Such an invitation doesn’t reflect well on the publishers either, because of course gold OA is the route where authors pay a fee or “Article Processing Charge”. However, if your research budget can cover the cost then this can be quicker.
  4. Open reviews: there is a possibility that reviewers will be more thorough if their reviews are publicly seen, so I’m not sure that this will necessarily speed the process up. But if you’re looking for explicit reasons why you’ve been rejected, then such a system could be helpful. PeerJ is a well known example of a journal that does this.
  5. Publish first and opt for post-publication peer review. The example often given is F1000, which is really a publishing platform rather than a journal. Here, the research is published first and labelled as “awaiting peer review”. It is indexed after peer review by services like PubMed, Scopus, the British Library, etc. F1000 also has open peer review, so the reviews as well as the latest version can be seen, and authors can make revisions after peer review and at any time. An alternative to F1000 is to put your draft paper into an open access repository, where it will at least be visible and available, and seek peer review through publication in a journal later. However, there are disciplinary differences as to whether this will be acceptable practice when you later submit to journals (is it a redundant publication because it’s in a repository?), and indeed whether your pre-print will be effective in claiming your “intellectual territory”. In some disciplines the fear is that repository papers are not widely seen, so others might scoop you and reach recognised publication first. In the sciences this is less likely, since the access to equipment and lengthy experiments required make it unlikely that work will be duplicated in time.

Be a peer reviewer, and be prompt with your responses
I have three steps you can follow, to accelerate even traditional peer review:

  1. When you are invited to carry out a peer review that you cannot find time for, or for which you are not the right person, say “no” quickly, and perhaps suggest someone else suitable. This will speed things up for your peers and make a good impression on an editor: one day this might be important.
  2. If you say “yes” then you can be prompt and clear: this will support your peers but may also enhance your reputation. Larger publishers may track peer reviewers’ work on a shared (internal only or publicly visible!) system, and you can claim credit yourself somewhere like Publons. (See an earlier blogpost that discusses credit for peer review.)
  3. Are you setting the bar too high? By raising standards ever higher, the time it takes for research to be shared is lengthened. Of course this is also about meeting the quality standards of the journal and thereby setting and maintaining the standards of your discipline. Not an easy balancing task!

Finally, remember that publication is only the beginning of the process: you also have to help your colleagues, peers and practitioners to find out about your article and your work. Some editors and publishers have advice on how to do that too, so I’m sure that it will impress them if you do this!

Rejections, revisions, journal shopping and time… more and more time

I read a great news item from Nature called “Does it take too long to publish research?” and wanted to highlight it here. In particular, I thought that early career researchers might relate to the stories of featured researchers’ multiple rejections: there is some consolation in hearing others’ experiences. (Recently rejected authors might also seek advice in a great piece from The Scientist in 2015: Riding out rejection.) I also wanted to write up my own reflections, identifying some reasons for rejection (these appear in bold throughout, in case you want to scan for them).

Whilst I’m on the topic of rejection stories: a recent episode of Radio 4’s The Life Scientific featured Peter Piot, who described (if I understood correctly) how difficult it was to get his research on HIV published in the 1980s because it was so groundbreaking that reviewers could not accept it. He knew that his findings were important and he persevered. So that could be one reason for rejection: you’re ahead of your field!

(Peter Piot also described his time working for the United Nations, in what was essentially a break from his academic career: if you’re interested in academic career breaks then you could take a look at the Piirus blog!)

Anyway, back to the Nature news item, where I picked up particular themes:

  1. Authors may have been rejected a number of times before they are even peer reviewed: a “desk rejection”. One of the authors featured was glad to finally receive requests for revisions after so many rejections without explanation. Without an explanation, we can’t know what the editors’ decisions were based on, but as I noted in an earlier post, editors might be basing their decisions on criteria like relevance to the journal’s readership, or compliance with the journal’s guidelines.
  2. Journals do report on time to publication, but that doesn’t always include the time you’ve spent on revisions. At some journals, if you resubmit after making revisions then the clock is re-started at the resubmission date. Likewise, I have read (or heard: sorry, I can’t find the link) that reported rejection/acceptance rates don’t count papers which are invited for re-submission with revisions as rejections. So you might feel rejected when you have to make so many revisions, but in statistical terms your paper has not been rejected (yet!). There is still time for it to be rejected after you have resubmitted, of course, and that probably happens more often than you think. Some think that journals are not counting and reporting fairly, and I think there is room for improvement, but it’s a complex area.
  3. Top journals can afford to be more picky and so the bar seems to have been raised, in terms of requirements for publication (hence increased numbers of authors per paper, who bring more data between them). As the Nature news item says: “Scientists grumble about overzealous critics who always seem to want more, or different, experiments to nail a point.”
  4. Rejections could be the result of authors “journal shopping”, whereby they submit to top/high-impact journals first and work down a list. This is possibly because those who hire and fund researchers rely on the reputation and impact factor of the journal where an article is published. Researchers who target journals in the middle range of impact factor seem to stand the best chance of a quick review turnaround, but it seems that researchers are accepting the risk of rejection and slower publication in order to stand a chance of appearing in a top journal.
  5. Journal editors and publishers are trying to ensure that the publication process is not slowed down, wherever possible. I’d like to quote one nice example of such attempts: “In 2009, Cell also restricted the amount of supplemental material that could accompany papers as a way to keep requests for “additional, unrelated experiments” at bay.” However, the Nature News item also points out the increased volume of papers to be processed and additional checks that papers might go through these days, for example plagiarism screens, animal welfare reports, competing interest disclosures, etc. Plagiarism screens can be tough: I remember an author telling me about how his paper was rejected for what amounted to self-plagiarism.
  6. The peer review process does take time, and at different journals this process might be quicker or slower, but even though (as I’ve previously blogged) there are pressures on the peer review system, it is not taking longer than it used to, on average. Neither has the digital world sped it up. The news item goes on to recount some of the innovations around peer review that various journals and publishers are implementing.

This made me think that there’s got to be a project somewhere, for someone to classify the revisions asked for in peer review processes and then count which is the most common. Reasons in my list so far:

  • poorly/not succinctly written (i.e. not intelligible!)
  • too little explanation/text
  • abstract doesn’t reflect findings
  • ethical issues with the data presented
  • ethical issues with the method
  • method unsuited to question
  • conclusions are over-reaching
  • needs to be set in context of other (specific/non-specific) research & add citations

These could be areas to be revised or indeed, reasons for rejection. I’m sure that there are more issue types and that my list is not complete, so feel free to share some more in the comments.

I know that some authors take the revision suggestions but do not resubmit to the journal that reviewed their article: they withdraw their article from that journal and then submit to one lower on the prestige list, thereby perhaps side-stepping another rejection, and apparently achieving publication more quickly, for the second (or fifth, or fifteenth) choice journal cannot know about the time that an article spent awaiting the verdict of a different journal. Perhaps that is why journals prefer to count their publication time from the date of resubmission: they don’t know, either, whether an article will ever be resubmitted. And is it fair of an author to use a journal’s peer review process to polish their article, but not actually publish with that journal? A complex area, as I said.

Well, if all this complexity has put you in need of cheering up, then I must recommend the Journal of Universal Rejection to you. If you don’t laugh then you might cry…

Publish then publicise & monitor. Publication is not the end of the process!

Once your journal article or research output has been accepted and published, there are lots of things that you can do to spread the word about it. This blogpost has my own list of the top four ways you could do this (other than putting it on your CV, of course). I also recommend any biologists or visual thinkers to look at:
Lobet, Guillaume (2014): Science Valorisation. figshare. http://dx.doi.org/10.6084/m9.figshare.1057995
Lobet describes the process as “publish: identify yourself: communicate”, and points out useful tools along the way, including recommending that authors identify themselves in ORCID, ResearchGate, Academia.edu, ImpactStory and LinkedIn. (Such services can create a kind of online, public CV and my favourite for researchers is ORCID.) You may also find that your publisher offers advice on ways to publicise your paper further.

PUBLICISE

1) Talk about it! Share your findings formally at a conference. Mention it in conversations with your peers. Include it in your teaching.

2) Tweet about it! If you’re not on Twitter yourself (or even if you are!) then you could ask a colleague to tweet about it for you. A co-author or the journal editor or publisher might tweet about it, or you could approach a University press officer. If you tweet yourself then you could pin the tweet about your latest paper to your profile on Twitter.

3) Open it up! Add your paper to at least one Open Access repository, such as your institutional repository (they might also tweet about it). This way your paper will be available even to those who don’t subscribe to the journal. You can find an OA repository via ROAR or OpenDOAR. Each repository will have its own community of visitors and its own ways of helping people discover your content, so you might choose more than one repository: perhaps one for your paper and one for data or other material associated with it. If you put an object into Figshare, for example, it will be assigned a DOI, and that will be really handy for getting altmetrics measures (see the sketch after this list).

4) Be social! Twitter is one way to do this already, of course, but you could also blog about it, on your own blog or perhaps as a guest post for an existing blog that already has a large audience. You could put visual content like slides and infographics onto Slideshare, and send out an update via LinkedIn. Choose at least one more social media channel of your choice for each paper.
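As promised above, here is a rough, hedged sketch of why a DOI is so handy for altmetrics: Altmetric.com offers a public lookup by DOI, so anything with a DOI can have its online attention tracked. The DOI below is simply the Lobet figshare item cited earlier; any DOI works, and the field names follow the public API as I understand it, so treat this as illustrative only.

```python
# A rough sketch: look up attention data for a DOI via Altmetric.com's public API.
import requests

def altmetric_summary(doi: str) -> dict:
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    resp = requests.get(url, timeout=30)
    if resp.status_code == 404:
        return {}   # no attention data recorded for this DOI yet
    resp.raise_for_status()
    data = resp.json()
    # Pick out a few headline numbers; field names as publicly documented.
    return {
        "score": data.get("score"),
        "tweets": data.get("cited_by_tweeters_count"),
        "readers": data.get("readers_count"),
    }

print(altmetric_summary("10.6084/m9.figshare.1057995"))
```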

MONITOR

  1. Watch download stats for your paper, on your publisher’s website. Measuring the success of casual mentions is difficult, but you can often see a spike in download statistics for a paper, after it has been mentioned at a conference.
  2. Watch Twitter analytics: is your tweet about your paper one of your Top Tweets? You can see how many “engagements” a tweet has, i.e., how many clicks, favourites, re-tweets and replies, etc it accrued. If you use a link shortening service, you should also be able to see how many clicks there have been on your link, and where from. (bit.ly is one of many such shortening services.) This is the measure that I value most. If no-one is clicking to look at your content, then perhaps Twitter is not working for you and you could investigate why not or focus on more efficient channels.
  3. Repositories will often offer you stats about downloads, just like your publisher, and either or both may offer you access to an altmetrics tool. Take a look at these to see more information behind the numbers: who is interested and engaged with your work and how can you use this knowledge? Perhaps it will help you to choose which of the other possible social media channels you might use, as this is where there are others in your discipline who are already engaged with your work.

 

Ultimately, you might be interested in citations rather than engagements on Twitter or even webpage visits or downloads for your paper. It’s hard to draw a definite connection between such online activity and citations for journal papers, but I’m pretty sure that no-one is going to cite your paper if they don’t even know it exists, so if this is important to you, then I would say, shout loud!

Ensuring quality and annotating scientific publications. A summary of a Twitter chat

Screenshot of twitter conversation
Tweet tweet!

Last year (yes, I’m slow to blog!), I had a very productive conversation (or couple of conversations) on Twitter with Andrew Marsh, a former colleague and scientist at the University of Warwick. These are worth documenting here as a way to give them a narrative, and to illustrate how Twitter sometimes works.

Back in November 2015, Andrew tweeted to ask who would sign reviews of manuscripts, when reporting on a presentation by the Chief Editor of Nature Chemistry, Stuart Cantrill. I replied on Twitter by asking whether such openness would make reviewers take more time over their reviews (thereby slowing peer review down). I wondered whether openness would make reviewers less direct, and therefore possibly less helpful because their comments would be more open to interpretation. I also wondered whether such open criticism would drive authors to engage in even more “pre-submission”, informal peer reviewing.

Andrew tells me that, at the original event “a show of hands and brief discussion in the room revealed that PIs or those who peer reviewed manuscripts regularly, declared themselves happy to reveal their identity whereas PhD students or less experienced researchers felt either unsure or uncomfortable in doing so.”

Our next chat was kick-started when Andrew pointed me to a news article from Nature that highlighted a new tool for annotating web pages, Hypothes.is. In our Twitter chat that ensued we considered:

  1. Are such annotations a kind of post-publication peer review? I think that they can work alongside traditional peer review, but as Andrew pointed out, they lack structure so they’re certainly no substitute.
  2. Attribution of such comments is important so that readers would know whose comments they are reading, and also possibly enable tracking of such activity, so that the work could be measured. Integration with ORCID would be a good way to attribute comments. (This is already planned, it seems: Dan Whaley picked up on our chat here!)
  3. Andrew wondered whether tracking of such comments could be done for altmetrics, and Altmetric.com responded. Comments on Hypothes.is could signal scholarly attention for the work they comment on, or indeed attract attention themselves (see the sketch after this list for how such annotations can be counted). It takes a certain body of work before measuring comments from such a source becomes valuable, but does measuring itself incentivise researchers to comment? I’m really interested in the latter point: motivation cropped up in an earlier blogpost of mine on peer review. I suspect that researchers will say that measurement does not affect them, but I’m also sure that some of them are well aware of, e.g., their ResearchGate score!
  4. Such a tool offers a function similar to marginalia and scrawls in library books. Some are helpful shortcuts (left by altruists, or just those who wanted to help their future selves?!), some are rubbish (amusing at their best), and sometimes you recognise the handwriting of an individual who makes useful comments, hence the importance of attribution.
  5. There are also some similarities with social bookmarking and other collaboration tools online, where you can also publish reviews or leave comments on documents and publications.
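For anyone curious how such annotations might actually be counted (point 3 above), Hypothes.is exposes a public search API. A minimal sketch follows; the article URL is just a placeholder, and the endpoint and response fields are as I understand the public documentation, so check it before relying on this.

```python
# A rough sketch: count public Hypothes.is annotations on a given web page.
import requests

def count_annotations(page_url: str) -> int:
    resp = requests.get(
        "https://api.hypothes.is/api/search",
        params={"uri": page_url},   # search for annotations anchored to this URL
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("total", 0)

# Hypothetical example page; substitute the article you care about.
print(count_annotations("https://example.com/some-article"))
```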

And who thought that you couldn’t have meaningful conversations on Twitter?! You can also read responses on Twitter to eLife‘s tweet about its piece on the need for open peer review.

The best part of this conversation between Andrew and me on Twitter was the ability to bring in others by incorporating their Twitter handles. We also picked up on what others were saying, like this tweet about journal citation distributions from Stephen Curry. The worst parts were trying to be succinct when making a point (whilst wanting to develop some points), feeling a need to collate the many points raised, and sometimes forgetting to flag people.

Just as well you can also blog about these things, then!

 

Who was at the Frankfurt book fair?

Many international publishers

I recently wrote about three particularly German things that I spotted at the Frankfurt book fair, but there was so much there that another blogpost full of pictures is needed. Here is a quick run-through of who I spotted at the book fair, with photos!

Of course, the Frankfurt book fair is huge: the exhibition space is much bigger than Online Information, or the UKSG conference, which is the closest thing to it that I’ve attended in the past. And it is more properly called the International Frankfurt Book Fair! Some international publishers were to be found in the halls for their country, where you could hear their language being spoken all around, whilst others were scattered in other halls matching their content rather than their nation, like this one in the academic publishing hall.

Specialist book publishers

There was a great deal of variety in the types of book represented at the fair, and in all things book related. Those seeking something special could find beautiful facsimiles or antique works, but the section of the fair dedicated to antiquities was guarded by extra security: you had to leave coats and bags behind to go in, so I didn’t. After all, I’m not in a position to invest in or be guardian of such treasures, and there was so much else to see.

Another area of the fair that had extra security was a hall that was apparently new for this year, where literary agents gathered for pre-booked meetings only. I wonder what was going on behind those screens? Agents selling books to publishers and negotiating terms, I imagine. The whole fair has the atmosphere of high-stakes deals, and people going about important business, not just in the exhibition halls but all around the site. There were publishers doing deals with libraries and bookshops, and technology providers with services for the publishers or with products for readers directly. There were education tool providers, and also companies who sell all the extras that you can find in bookshops like stationery and gifts: many of these stalls were making individual item sales at the book fair, too, so you could pick up a present for your loved ones.

Not just books: gift providers, too

I spent most of my time in the hall for scientific and academic publishing, but I did walk through other halls, and spotted many art publishers and stalls for children’s books and comic books which had some highly creative and attractive displays: these were really inspiring and made me feel proud to be a part of this information world, with just a little pang of regret that the academic world is so much less aesthetic and so much more serious looking! Ah well, the academic information world is full of really interesting challenges, and I was really pleased to see that a German Library school was amongst the stalls in the education area, recruiting students to degree programmes in librarianship and information science.

Publishers of children’s books

There was so much to see, across so many different enormous conference halls that it was quite possible to be lost in the indoors world of the exhibition centre, and to forget the world outside… sometimes it seemed as though the whole world was at the Frankfurt book fair!

 

A rare glimpse of the outside world, from within the Exhibition centre at Frankfurt.

 

 

Academic blogs: they risk plagiarism, don’t they? Three key aspects to consider.

After attending the Digital Academic event in Warwick on 23 March, on behalf of Piirus, I reflected on one of the conversations relating to plagiarism. Should researchers worry about plagiarism, if they begin to blog? Here are my thoughts on three important aspects of this concern:

1.  Hey, that was my idea!

There are academics who would not put their ideas into a blog post, because releasing them into the open runs the risk that others will get a journal article or book out of those ideas before they do. And it’s the journal articles and books that are the real currency of academic reputation, not blog posts. The argument against this concern is that a scholarly idea would be based upon substantial research: how could others re-do your research and publish before you? But there are times (perhaps discipline dependent) when a particular phrase or way of interpreting known research is what really makes a research output “zing”, and others could steal such a phrase or perspective.

Other researchers take the view that, if you blogged your idea, then you already claimed it as your own, so blogging is actually protection against plagiarism. This is great in theory: it’s publicly seen to be yours and so not even those with low moral standards would risk their reputations by plagiarising it. And if they did, you can prove that the material was yours first, with the date of your blog post’s publication.

Another reason to blog your ideas first is the apocryphal tales of papers languishing in peer review for just long enough for the referee to get their own paper on the same theme published. A quick blog post about your recent submission to a journal could be in your best interests!

However, we often say in English that “great minds think alike”, so in a case of apparent plagiarism, it might be just that someone else happened on the same idea. Your complaint that it is plagiarism might never be heard, or might be seen as sour grapes over what is mere bad luck. If you never let your idea out in the first place, you could at least be sure in such a scenario that it was just bad luck. On the other hand, if you blogged your idea then perhaps the person who stumbled on it too would get in touch and together you could create a richer, collaborative research output. Perhaps!

I can only conclude from these perspectives and scenarios that reaching the right audience at the right time is really crucial, and how you choose to do this will be a personal and discipline-specific decision. This is nothing new, but now there is the blog as a possible channel too. For some authors the only way to reach the right audience is in traditional journals, so those “zing” ideas are omitted from their blog, but that doesn’t mean that they can’t blog too! Maybe they could use a blog to promote a paper or book after publication. Blogs can be a great way to provide “teaser” content for a book, to promote it, if your publisher approves.

2.  Traditional publishers can provide protection

Some authors feel safer when their output is taken on by an established organisation, rather than releasing their work in what is essentially self-publishing through a blog. Even if you could prove that someone has plagiarised your work (from a blog, a journal article, a conference paper or any source), you would need the scholarly community to recognise that someone else had committed bad practice in order to get any kind of redress. Even attempting to achieve that recognition could take considerable energy, time and resources.

If your idea was first published by a society or publisher then they might have processes and resources with which to negotiate with the producer of the plagiarising article, and so provide you with support in your complaint. It is possible, but of course not guaranteed that you will find this supportive: your interests and the publisher’s interests might not coincide.

A case of plagiarism may also be a breach of copyright, and you may have the option of engaging a lawyer to defend your copyright. But remember that copyright law is all about the right to make money from your intellectual output. Perhaps a publisher will protect your work by way of protecting their own income: they will certainly understand commercial aspects, but of course their interests and yours might differ.

3.  Rejected for self-plagiarism

It could happen: your journal article is submitted to the most prestigious journal in your field and you get a rejection because substantial chunks of the content are found to appear elsewhere. Or perhaps worse: your article is published but then retracted because it is recognised as a redundant publication, with content that has previously been published. What a mess!

Of course, this regrettable situation could happen from one journal article to the next and not only from blog content. In fact, if your blog is aimed at a different audience, then you’re less likely to inadvertently repeat phrases in what amounts to self-plagiarism than when writing traditional outputs. There is also always the option of saving your blog post for after the publication has come out.

Final thoughts

The risk of plagiarism from others reading your blog post is no worse than when you have a conversation with someone at a conference, and in fact openness can lead to collaborations and other benefits, which is why that conference conversation might have happened in the first place. The risk of plagiarism is one that you need to weigh for yourself, and as the speakers at the Digital Academic event described, blogging brings opportunities that traditional publications alone might not, so that risk might be one worth taking.

Further thoughts on Peer Review & speeding up traditional journal publication

Back in January, I wrote about Peer Review. It’s a big topic! Here are some more reflections, following on from my last blog post about it.

Speeding things up, in journal article publication. (On “Peer review takes a very long time”)

picture of a pocket watch

I wrote that peer review “takes a very long time” because many scholars want to get their work out there to be read as soon as possible. Of course, this is a loose concept and “a very long time” is relative. Some might think that I am criticising publishers for being slow, but I’m not pointing the finger of blame! I know that publishers have been addressing the issue and peer review has sped up in recent times, especially since there is now software that can help track it: SPARC has a handy round-up of manuscript submission software. However, the peer reviewers themselves must respond and they are under a lot of pressure. The system can only be as fast as the slowest reviewer, and there are all sorts of (entirely understandable) circumstances that might slow an individual down.

I should take a look at some of the developments that have helped to speed up traditional scholarly communication, though:

Scholarly publishers have invested in initiatives like Sage’s OnlineFirst to help peer reviewed research articles to reach audiences before journal issues are complete, thus cutting publication waiting periods.

Some publishers have also introduced mega journals with cascading peer review systems, which are also often based on Gold Open Access. Impact Story’s blog has a great post about how authors can make the most of these types of journal.  These speed up an article’s time to publication because after a peer review that led to rejection from one title, your paper can get fast-tracked through to publication in the next “tier” title at the same publisher, without the need to submit again and start the process from the very beginning.

And of course, as a librarian I should mention the sophisticated alerting services that help researchers to find out about each others’ papers as soon as possible: researchers are no longer dependent on the print copy landing on their desk, and finding the time to browse through the table of contents!

Putting it online yourself is quicker: why not try that?

Some research repositories might take non-peer-reviewed content, and in theory, authors could always put a copy of their work on a personal web-page before peer review if they’re confident in it and just want it out there. There are disciplinary differences in authors’ reactions to this idea. This article in PLOS Biology makes the case for the biology community following in the footsteps of physics, in using pre-print servers to share such early versions. Its authors point out that there are benefits to doing this, including:

Posting manuscripts as preprints also has the potential to improve the quality of science by allowing prepublication feedback from a large pool of reviewers.

Many authors would not share their early manuscripts in this way, because they value peer review as a process of polishing their work. I think this is a reason for peer review to take place in the open, because then it becomes apparent just how important a contribution a peer reviewer might have made to a paper. As I said in my previous post, peer reviewers should get credit for their work, but perhaps I should have made it clear that I’m not talking about it looking good on their CV, or their peer review activity going down well with their Head of Department!

 

Even authors who are happy to share un-polished pre-peer-review versions of their work (aka pre-prints, aka manuscripts) might be wary if it is not the norm in their discipline, because it might prejudice their chances of publication in the big-name journals of their field. Authors will likely have to agree to clauses stating that the work has not previously been published elsewhere. When I worked at the University of Warwick, in the early days of their institutional repository we surveyed a number of big publishers to ask if they would consider repository deposit to constitute prior publication, and thus a breach of this kind of clause in their authors’ agreement. Some said yes, some said no.

This is not such a clear area for authors, and for many it’s not worth the time of enquiring or the risk of finding out the hard way, i.e. through rejection of their article because plagiarism detection software identifies it as previously published online. Researchers need the quality “badge” that a journal gives them, for their CV and their institution’s performance review processes: publishing articles is not all about communication to other researchers, but it is also about kudos.

 

For some authors therefore (I would guess most), the earliest version they might share would be a post-peer-review version (sometimes called a post-print, sometimes called an author’s final version), which if there are no embargo periods from the publisher, would become available at the same time as their article became available through an OnlineFirst scheme.

 

 

Post peer review: commentary and altmetrics

I mentioned post publication peer review in my previous post: I thought about it as an alternative to peer review then, and perhaps I should think about it more as something that is complementary to peer review. Perhaps peer review doesn’t need to be either traditional or post publication but it is already really a process that doesn’t end with publication.

 

There are many ways that researchers are sharing and commenting on each others’ work after it has been published, and therefore after the peer review process for traditional articles. We can track these interactions on sites like ResearchGate and Mendeley, and through altmetrics software that collates data on such interactions… but altmetrics and its role is a subject I’ve looked at separately already, and one I’m likely to return to again later!